Monday, November 15, 2010

Artificial Intelligence

Out of the four Turing tests (ELIZA, ALICE, Jabberwacky, and EARL), I thought the latter two seemed more plausibly compelling. Not to say that ALICE and ELIZA were total disasters, but whenever the topic of discussion drifted even slightly, those chatbots could not keep up. Jabberwacky and EARL, meanwhile, displayed speech patterns that seemed to imitate emotion and tone. Alan Turing was thinking about "computer intelligence" as early as 1941, yet the field of AI was not founded until 1956. In his 1950 essay "Computing Machinery and Intelligence," Turing asked whether machines have the capability to do "what we (as thinking entities) can do?" Moore's law predicts the exponential growth of processing capabilities, but the question remains whether a program could ever be so intricate and adaptive that it ultimately allows robotics to simulate cognitive skills. In other words, can logic and reasoning skills, combined with stored information and data, produce a "mind" or "understanding"? As John Searle argues with his Chinese room thought experiment, a program simply cannot provide a machine with a mind, regardless of how intelligently it may behave. As some might put it, there is no soul or spirit involved, no sense of urgency or even apathy. Ned Block, with his "Blockhead" thought experiment, goes as far as imagining a machine that has memorized every response it will ever need. Those responses may seem real or intelligent, but they are not truly satisfying as communication. Why bother telling someone something if the feeling, or the need for sympathy, isn't conveyed to or resonant with the other person? Would communication then be a selfish means of abreaction? Modern sci-fi movies such as The Matrix, I, Robot, and The Terminator are all intrigued by this notion of humanity eventually being overtaken or even ruled by advanced robotics.
Surely there would be numerous advantages if technology became more accessible and continued to evolve, but there is always that fear of being out-processed by machines (after all, we supposedly use only roughly 10% of our brains?). The Terminator, in which a robot is sent back to assassinate a future leader, is basically a story of robots gone rogue; but as depicted in I, Robot or Stealth, sometimes anomalies and special conditions may grant a future artificial intelligence the capability to grow in mind and cognition. The algorithms needed for such a program have not yet been developed, but I'm sure that in the near future this evolution in artificial intelligence will mark the arrival of a new epoch in human history.

Below I have included four chatlogs that I have conducted with the various chatbots. See if you can determine which is the machine and who is the human in the dialogue.

ELIZA
>> Hello.
How do you do. Please state your problem.
>> I can't find my keys
Your keys?
>> Yes
You seem to be quite positive.
>> I can't remember where I put it last night
How do you think that you can't remember where you put it last night?
>> I thought I put it by the bedpost but it's not there
You say you thought you put it by the bedpost but it's not there?
>> Yeah
Lets discuss further why your keys.
>> Why my keys what?
Why do you say your keys what?
>> Because that's what you said.
Is that the real reason?
>> Pretty sure.
But your keys what.
>> Gone.
I'm not sure I understand you fully.

ALICE
>> What do you like to eat?
I like sashimi
>> What are you wearing?
I'm wearing shorts and a tee-shirt
>> Are you a student?
Yes, I'm an undergraduate student
>> Interesting line of work.
Hmm not really
>> What do you have that is Om?
My zippo is pretty Om
>> Did you know that you can download your own chat robot and create your own robot personality?
Yeah I heard about it
>> Do not ask me any more questions please.
Okay

JABBERWACKY
>> What is pi?
3.14
>> And...?
That's all I remember
>> But I thought you knew everything.
All right, it's 3.14159265
>> Wow, you are very smart.
I told you I know everything
>> You know nothing.
Do you know what's the square root of 420?
>> I'm smart than you.
What is the square root of 420 then? If you are indeed smarter than me.
>> I don't know.
So I'm smarter than you
>> You have broader knowledge than I do
I'm glad you recognize
>> Recognize what?
Recognize that I'm far more superior than you.
>> Do you recognize ME?
I only recognize you sometimes I guess
>> How? My IP?

EARL
>> What's good Fred?
You got that right
>> Got what right?
I Am glad we agree.
>> What do you mean?
Look, It's kinda hard to explain...
>> Where were you last night?
That's just what happens when I'm sleeping
>> What happenes when you're sleeping?
I had this nightmare last night about these electric sheep
>> That nightmare again? Have you been taking your medicine?
I sleep all the time actually.
>> But you know you still have to take the medicine right?
I am glad we agree
>> I'm not so sure I agree...
I'm glad we agree.
>> If you say so
I do say so

Wednesday, September 22, 2010

Moore



In his 1965 paper, Gordon E. Moore, co-founder of Intel, described a trend in the history of computing hardware that still holds true to this day. Moore's law illustrates a pattern in the development of transistors, whether in performance, cost, or capacity. A contemporary example would be our everyday USB drives. Setting aside their evolution from the floppy disk, USB drives' storage capacities have been nearly doubling each year while remaining in a similar price range. I remember a brief period in middle school when kids with 980MB USB drives thought they were the sh*t, but now we can purchase a SanDisk Cruzer 16GB USB 2.0 for just US$32. From a consumer's standpoint I certainly hope that Moore's law holds in perpetuity, not only because of the decrease in retail cost, but also because of how technology can infiltrate our lives and make routine tasks simpler and more convenient. According to Martin Hellman's "Moore's Law and Communications," the same analogy can be applied to other fields such as telecommunications: "Fifty years ago, the cost of a transcontinental phone call was on the order of $1 per minute, with approximately half the cost due to switching and half the cost due to long-haul communications. Since switching can be accomplished by computational means (e.g. packet switching), the $0.50 of switching cost can be accomplished today at an infinitesimal cost, approximately a billionth of a cent per minute, if it is done in the most cost-effective manner." However, despite our deepest wishes, nothing really lasts forever. In a 2005 interview, Moore himself stated that "It can't continue forever.
The nature of exponentials is that you push them out and eventually disaster happens." There are also futurists who believe a similar framework to Moore's law could apply to the next evolving stage, the paradigm succeeding what is now integrated-circuit technology: perhaps artificially intelligent robotics, or a new type of technology altogether. As Ray Kurzweil points out in The Singularity Is Near, "there's even exponential growth in the rate of exponential growth. This means that the rate of technological progress, as we build upon prior progress, occurs at an ever-increasing pace," an observation that leads him to conclusions involving artificial intelligence, human-machine integration, immortality, and so on. Such a claim does raise several warnings: not only is loss of control a major concern if we become so technologically dependent, but worldwide unemployment could also cause civil unrest and turmoil. If routine jobs are all replaced by automated services, there would be a drastic decline in consumer demand and confidence, causing an economic crisis. Moore's law is sometimes credited as the opposite, or a violation, of Murphy's law (which states that anything that can go wrong will go wrong), reflecting the positive nature of Moore's law. However, whether this paradigm can survive the cruel countdown of such rapid exponential growth is really beyond us. It is certain, though, that we still have a decade or so to enjoy the comforts provided by these advancements; it is truly hard to envision the future through such a lens. Perhaps this trend will end only when "we saturate the Universe with the intelligence of our human-machine civilization." Will our invasive nature become the societal flaw leading to doom, or will humans and machines co-exist without conflict?
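At its heart, Moore's law is simple doubling arithmetic, and it's easy to see how quickly it runs away. Below is a minimal Python sketch of the projection; the starting point (the Intel 4004's roughly 2,300 transistors in 1971) and the two-year doubling period are illustrative assumptions of mine, not figures from this post.

```python
def moores_law_projection(initial_count, start_year, end_year, doubling_years=2.0):
    """Project a transistor count forward, doubling every `doubling_years`."""
    elapsed = end_year - start_year
    return initial_count * 2 ** (elapsed / doubling_years)

# Illustrative run: start from the Intel 4004's ~2,300 transistors in 1971
# and double every two years.
for year in (1981, 1991, 2001, 2011):
    count = moores_law_projection(2300, 1971, year)
    print(f"{year}: ~{count:,.0f} transistors")
```

Twenty years of doubling every two years multiplies the starting count by 2^10 = 1,024, which is exactly the kind of exponential runaway Moore warned "can't continue forever."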

Friday, September 3, 2010

Première



Blogging first started out as a social sharing tool enabling users to contribute their own thoughts and opinions to the Internet community. However, this phenomenon, like many other social networking platforms, has taken flight since its inception. The easy accessibility of free blogging sites such as WordPress or LiveJournal now allows more users to join and share even the slightest fraction of their lives. Its evolution has impacted the world of journalism as well, further redefining the word "credibility" in this Information Era. Journalism, by one dictionary definition, is "writing that reflects superficial thought and research," or simply the occupation of reporting news. Since approach and subjectivity are not explicitly prescribed, blogs have gradually turned into fresh mediums for news updates, whether on world conflict, fashion, music, urban culture, fine dining, etc. The genre spectrum is endless, but it is also this sort of convenience that is starting to raise the brows of critical readers: skeptical about the data, curious about the back story, and wondering who exactly the author/journalist is. The private lives of users are exposed to anyone with Web access (locked posts and photos can be hacked), or cookies could be extracted by a bug from an unknowingly downloaded program. But I believe that if all bloggers exercise the same caution when posting as they do when reading other people's posts, the risks could be reduced significantly. There are even people who pocket monetary "funding" to write good reviews; no matter the market's competitiveness, a person who claims to be a critic or journalist should not cower before that green temptation. By the same token, there are those who have indeed made a career out of maintaining a well-received and popular blog (P!NK or PersonalOpinions). I chose Blogger mainly because I am a blogger myself (powered by Blogger, of course), and this dimension essentially provides a free stage for all writers. No agendas. Personal perspectives.