Universal Outcome Conundrum
- Ayaan_817
- Posts: 26
- Joined: May 1st, 2020, 1:22 am
Universal Outcome Conundrum
The machine includes features that allow the user to enter conditions that control the tasks t ∈ Tn, i.e., which combination of tasks is required to achieve x.
For example: A1 enters a condition in the machine that the tasks belonging to Tn should only contain ‘n’ number of tasks or only tasks x, y.
Keeping in mind the multiverse theory, there are infinite possible combinations of tasks in set Tn.
Case 1
User A1 enters a condition that to achieve x, he must only do task t1 and not do t2, where x can be simply making an omelette, t1 can be peeling bananas, and t2 can be all the conventionally necessary tasks for making an omelette.
It is obvious that x cannot be achieved unless t2 is performed regardless of whether t1 is being executed or not.
How can this be possible when there can be infinite combinations of tasks in a set Tn?
Case 2
User A1 enters a condition that to achieve x where x ≠ nothing, the tasks in set T1 must contain 0 tasks.
How can x be achieved when no task is being performed?
The above case amounts to asserting that 0 = y where y ≠ 0, which is a contradiction.
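The two cases can be put in more concrete terms. Here is a minimal, purely illustrative sketch (all names such as t1, t2, achieves_x, and the candidate sets are invented for this post, not taken from the machine described above): the machine is modelled as a search over candidate task sets, with the user's condition acting as a filter. Under the assumption that x requires t2, both conditions leave no feasible plan.

```python
# Illustrative sketch only: model the machine as a search over candidate
# task sets, with the user's condition as a filter on those sets.

def feasible_plans(candidate_sets, achieves_x, condition):
    """Return the candidate task sets that satisfy the user's condition
    AND actually achieve x."""
    return [s for s in candidate_sets if condition(s) and achieves_x(s)]

t1 = "peel bananas"
t2 = "conventional omelette-making tasks"

# Assumption from Case 1: x (the omelette) is achieved only if t2 is done.
achieves_x = lambda tasks: t2 in tasks

candidates = [set(), {t1}, {t2}, {t1, t2}]

# Case 1: condition "do t1 and not t2" -> no candidate set achieves x.
case1 = feasible_plans(candidates, achieves_x,
                       lambda s: t1 in s and t2 not in s)
print(case1)  # []

# Case 2: condition "perform 0 tasks" -> again, no set achieves x.
case2 = feasible_plans(candidates, achieves_x,
                       lambda s: len(s) == 0)
print(case2)  # []
```

On this reading, neither case produces a contradiction in the machine itself: the machine simply returns an empty set of feasible plans, because the user's condition has excluded every task set that could achieve x.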
- Terrapin Station
- Posts: 6227
- Joined: August 23rd, 2016, 3:00 pm
- Favorite Philosopher: Bertrand Russell and WVO Quine
- Location: NYC Man
Re: Universal Outcome Conundrum
Ayaan_817 wrote: ↑May 1st, 2020, 2:29 am
Suppose that a person A1 wants to achieve an object x in his life. The person A1 has no way of knowing the steps he needs to take to achieve x and so, not to take a risk, A1 builds a machine that can tell A1 all the necessary steps he needs to take in order to achieve x.

How is A1 supposed to build such a machine if A1 doesn't know the steps he needs to take to achieve x?
Ayaan_817 wrote:
The machine includes features that allow the user to enter conditions to control the tasks ϵ Tn with any combination of tasks required to achieve x.

That sentence isn't very clear. "Control the tasks (ϵ Tn) with any combination of tasks" doesn't really make sense.
Ayaan_817 wrote:
For example: A1 enters a condition in the machine that the tasks belonging to Tn should only contain 'n' number of tasks or only tasks x, y.
Keeping in mind the multiverse theory, there are infinite possible combinations of tasks in set Tn.

Huh? How would that follow? I suppose you're saying that the tasks could be any arbitrary thing? (Why would you assume that, though?)
Ayaan_817 wrote:
Case 1
User A1 enters a condition that to achieve x, he must only do task t1 and not do t2 where x can be simply making an omelette and t1 can be peeling bananas and t2 can be all the conventionally necessary tasks in order to make an omelette. It is obvious that x cannot be achieved unless t2 is performed regardless of whether t1 is being executed or not.
How can this be possible when there can be infinite combinations of tasks in a set Tn?

So you're asking "If we have an infinity of arbitrary tasks that can be assigned as T1 and T2, how can it be possible that in no case, T1 amounts to making an omelette?" I just want to clarify if that's what this is supposed to amount to.
Ayaan_817 wrote:
Case 2
User A1 enters a condition that to achieve x where x ≠ nothing, the tasks in set T1 must contain 0 tasks.
How can x be achieved when no task is being performed?
The above case formulates that 0 = y where y ≠ 0.

Huh? Where are you getting y from in the first place?
At any rate, if you're not performing any task and you want to do nothing, then that should work.
- Ayaan_817
- Posts: 26
- Joined: May 1st, 2020, 1:22 am
Re: Universal Outcome Conundrum
Terrapin Station wrote:
How is A1 supposed to build such a machine if A1 doesn't know the steps he needs to take to achieve x?

I didn't say that x is the machine, and even if you're assuming x to be that machine, this is the conjecture!
Terrapin Station wrote:
That sentence isn't very clear. "Control the tasks (ϵ Tn) with any combination of tasks" doesn't really make sense.

It means the tasks belonging to Tn (it is a set; I couldn't write the 'n' in subscript) can be controlled using the machine, that is, the quantity of the tasks or the tasks themselves in set Tn.
Terrapin Station wrote:
So you're asking "If we have an infinity of arbitrary tasks that can be assigned as T1 and T2, how can it be possible that in no case, T1 amounts to making an omelette?" I just want to clarify if that's what this is supposed to amount to.

I'm talking about a specific case, or set, where the tasks are t1 (peeling a banana) and so on.
Terrapin Station wrote:
I suppose you're saying that the tasks could be any arbitrary thing? (Why would you assume that, though?)

Yes, they could be, because according to me the set can contain any combination of tasks, arbitrary or not.
- Terrapin Station
- Posts: 6227
- Joined: August 23rd, 2016, 3:00 pm
- Favorite Philosopher: Bertrand Russell and WVO Quine
- Location: NYC Man
Re: Universal Outcome Conundrum
I didn't write anything about x being the machine.
You said that A1 doesn't know how to achieve x.
Yet you said that A1 builds a machine that tells him how to achieve x.
How does A1 build a machine that's going to tell him how to achieve x if he has no idea how to achieve x?
Say that you have no idea how to play guitar. Well, how in the world are you supposed to build a machine that's going to tell you how to play guitar if you don't know how to play guitar?
Let's just settle this part first.
- Ayaan_817
- Posts: 26
- Joined: May 1st, 2020, 1:22 am
Re: Universal Outcome Conundrum
Terrapin Station wrote: ↑May 1st, 2020, 10:24 am
I didn't write anything about x being the machine.
You said that A1 doesn't know how to achieve x.
Yet you said that A1 builds a machine that tells him how to achieve x.
How does A1 build a machine that's going to tell him how to achieve x if he has no idea how to achieve x?
Say that you have no idea how to play guitar. Well, how in the world are you supposed to build a machine that's going to tell you how to play guitar if you don't know how to play guitar?
Let's just settle this part first.

Ever heard of machine learning? I admit I don't know a lot about it, but as far as I know, you're supposed to enter samples and other basic code into an AI program, and then it learns by itself gradually.
Since A1 has an idea of what 'x' is, A1 can surely build or find some samples of 'x', either physical or multimedia, and convert them to a medium that the AI program can understand.
And in the conditions, I said that A1 has no idea how to achieve x, but let's assume that A1 knows coding.
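The "enter samples and let it learn" idea can be sketched very crudely (this is illustrative only, not real machine learning; the sample data and the function name are invented for this post): given several worked examples of achieving x, the program infers the "necessary" steps as those that appear in every example.

```python
# Illustrative only: infer "necessary" steps for achieving x as the steps
# common to all provided samples. A crude stand-in for "learning".

def infer_necessary_steps(samples):
    """Given several step-lists that each achieved x, return the set of
    steps present in every sample."""
    common = set(samples[0])
    for steps in samples[1:]:
        common &= set(steps)  # keep only steps shared with this sample
    return common

# Invented samples: three ways someone made an omelette.
samples = [
    ["crack eggs", "whisk", "heat pan", "fry", "add salt"],
    ["crack eggs", "whisk", "heat pan", "fry", "add cheese"],
    ["crack eggs", "heat pan", "whisk", "fry"],
]

print(sorted(infer_necessary_steps(samples)))
# ['crack eggs', 'fry', 'heat pan', 'whisk']
```

On this picture, the builder of the program never needs to know how to make an omelette; the necessary steps are extracted from the samples, which is the point being argued here.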
- Terrapin Station
- Posts: 6227
- Joined: August 23rd, 2016, 3:00 pm
- Favorite Philosopher: Bertrand Russell and WVO Quine
- Location: NYC Man
Re: Universal Outcome Conundrum
Ayaan_817 wrote: ↑May 2nd, 2020, 12:14 pm
Ever heard of machine learning? I admit I don't know a lot about it, but as far as I know, you're supposed to enter samples and other basic code into an AI program, and then it learns by itself gradually.
Since A1 has an idea of what 'x' is, A1 can surely build or find some samples of 'x', either physical or multimedia, and convert them to a medium that the AI program can understand.
And in the conditions, I said that A1 has no idea how to achieve x, but let's assume that A1 knows coding.

I don't buy that we have any sort of AI yet that can tell someone how to do something without someone doing programming who knows how to do at least the sort of thing (if not the specific thing) in question. There are probably adaptive programs, but they'd only work if the programmer has a pretty good idea just what sort of procedures need to be followed and maintained during the adaptations, which would require knowing how to do the type of task in question.
- Ayaan_817
- Posts: 26
- Joined: May 1st, 2020, 1:22 am
Re: Universal Outcome Conundrum
Terrapin Station wrote:
I don't buy that we have any sort of AI yet that can tell someone how to do something without someone doing programming who knows how to do at least the sort of thing (if not the specific thing) in question. There are probably adaptive programs, but they'd only work if the programmer has a pretty good idea just what sort of procedures need to be followed and maintained during the adaptations, which would require knowing how to do the type of task in question.

On the birth (or death, I don't remember) anniversary of Bach (the composer), Google (hope you've heard of it) made a doodle that used AI to take a random musical sheet created by a user and alter and tweak it a bit to make it like a Bach composition. The program used almost 300 (again, I'm not sure) Bach compositions entered into the code by the programmers, and analysed them to do what it was supposed to.
So the programmer doesn't necessarily need to know how to do the task; he/she just needs to know the task.