We know that every computable algorithm can be carried out by a Turing machine. Suppose I have a Turing machine M that, on input <N,w>, verifies that the Turing machine N correctly implements the description w. In other words, M accepts <N,w> if and only if N correctly implements the algorithm described by w. I could even try to run M on input <M,w>, where w is the description "M is a Turing machine that correctly verifies implementations of descriptions". The proof that no such M exists is a reduction from the acceptance problem for Turing machines: if we had such an M, we could decide acceptance as follows.
1. On input <A,w>, construct a new description w' = "A is a Turing machine that correctly accepts the string w".
2. Run the Turing machine M on input <A,w'>.
3. Accept if M accepts; otherwise reject.
Since the acceptance problem for Turing machines is not recursive, neither is this verification problem.
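The reduction above can be sketched in Python. Here `M` is a hypothetical verifier, assumed as a stand-in for the impossible machine described above; the sketch only shows how such an M, if it existed, would yield a decider for the acceptance problem:

```python
# Sketch of the reduction from the acceptance problem A_TM to the
# verification problem. `M` is an assumed (impossible) total decider:
# M(N, w) returns True iff machine N correctly implements description w.

def decides_acceptance(A, w, M):
    """If the verifier M existed, this would decide whether A accepts w."""
    # Step 1: build the new description w'.
    w_prime = f"a Turing machine that correctly accepts the string {w!r}"
    # Step 2: run M on <A, w'>; Step 3: accept iff M accepts.
    return M(A, w_prime)

# Since A_TM is undecidable, no total verifier M with this behavior can exist.
```

The point of the sketch is only the wiring: a decider for verification would give a decider for acceptance, contradicting undecidability of A_TM.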
Therefore, there is no algorithmic procedure to make an algorithm.
Consider the question in the movie "Ex Machina": is it possible to make an Artificial Intelligence (AI) program that has consciousness? Before considering this in detail, here is a concrete example of what I mean by "consciousness". Suppose you are having lunch. You know the procedure for picking up a slice of pizza and putting it into your mouth. You also know that you can stop having lunch and leave the restaurant. However, if you build an AI program that is able to have lunch, the program would not be able to leave the restaurant. So what I mean by "an AI having consciousness" is that the AI program is aware of what it is doing, and has the option not to do it. Now we know there is no algorithmic procedure to figure out whether a given algorithm is right or wrong with respect to a description w. Therefore, no algorithm can create an algorithm from a description w. Furthermore, no algorithm can describe what a given algorithm M does.
This argues that we cannot make an AI that has consciousness. I have two questions at this point: is it really impossible to create the AI that people imagine? And do even human beings have consciousness?
My answer to the first question is that it is at least not possible for a Turing machine. Imagine the future: will human civilization still use the Turing machine as its main model of computation? If not, what will replace it?
Second, we don't know why we are living, and we don't know what we are doing. This has long been a question for many philosophers. What I think is that a being can have consciousness only to the extent that its brain can accept.
That's an interesting topic.
But I don't agree with the proof that "an AI based on a Turing machine cannot have consciousness".
From the proof that "under a Turing machine, we cannot correctly tell whether a procedure works on a given input w",
you conclude that an AI cannot be conscious.
But humans also lack the ability to "correctly determine whether a procedure works on a given input w".
So by your definition of consciousness, wouldn't humans fail to have it as well? (That is your second question.)
Therefore, I think the bar for consciousness should be lowered,
and then both humans and Turing-machine AIs could be conscious.
Hello co2meal! Haha.
What great insight. Yes, I think so too.
Humans merely have a deeper consciousness than computers;
I believe consciousness in its complete sense is impossible even for humans.
If we lower the bar for consciousness, it might become possible, as you say.
But I'd like to keep the definition of consciousness as it is and think further:
is consciousness really impossible, and if it is, in what form would it become possible?