
Thread Information

Views: 1,180
Replies: 2
Rating: 0
Status: CLOSED
Thread Creator: Nksor (01-22-13 09:47 PM)
Last Post: thenumberone (01-27-13 07:14 AM)

The Rise of Artificial Intelligence – And Why Humanity Isn't Ready

 

01-22-13 09:47 PM
Nksor is Offline
| ID: 729295 | 1685 Words

Nksor
the_casualty
Level: 138


POSTS: 5776/5856
POST EXP: 228223
LVL EXP: 31521102
CP: 1171.6
VIZ: 131963

Likes: 0  Dislikes: 0
Following is the essay I wrote for an Original Oratory at National Forensic League competitions. For those who don't know, an Original Oratory is a spoken speech, typically on a controversial topic, that takes a stance and gives fact-based reasons for that stance. It is typically intended to be an entertaining, refreshing speech that uses hand movements and personal connections. These are the rules as stated on the National Forensic League website:

An oratory is 7–10 minutes in length, with a 30-second grace period.
There may be no more than 150 quoted words, or 30 seconds' worth of quoted speech.
An oratory is given standing up, in front of a judge and five or six other competitors. In semi-final and final rounds, audiences may be present, and at no point may the competitor's back be turned towards the audience or the judge.
There may be no props, charts, diagrams, etc.

Simply put, it's a highly competitive event where people give every single drop of energy they have to convey their message within the time provided. It's really fun, especially when you sit back and compare your speech with others', and travel all over the Northwest getting the opinions and ideals of people along the way. Public speaking has always come relatively easily to me, and putting it at a competitive level just ramps up the personal entertainment even more. The personal achievement I've reaped from these events has far surpassed that of any other extracurricular activity I have done in my life.

So, let me know what you think about the essay in general, and if you have any suggestions and/or criticisms regarding it. I'd also be interested in stirring up a conversation or a debate regarding the topic as a whole. I have genuine interest in the field of artificial intelligence and computer science as a whole, so I could talk for hours on end about it. So, yeah, feel free to impose any and all questions that come to mind: I won't bite.

Well, probably.

The Rise of Artificial Intelligence – And Why Humanity Isn't Ready




The "information revolution," or the rise of computers and mobile phones in our everyday lives, has effectively influenced and changed the very way we function in today's society. Everything – from writing a letter, to checking our bank statements, to talking with a loved one, to playing a game, to updating all our friends on how often we're using the restroom – has changed in the past decade. Indeed, much of society has become joined at the hip with the latest advances in technology. With regard to technology, many ask themselves, "What's next?" A common answer, as stated by Carlos Ramos, Juan Carlos Augusto, and Daniel Shapiro of Intelligent Systems, a world leader in the analysis of self-reasoning computing, is the development of "artificial intelligence": computer systems that are able to perform tasks that normally require human intelligence. However, is humanity itself prepared for the overwhelming ethical and moral issues that would be presented by the development of these self-aware computers? The answer, in its simplest form, is no. First, I will explain what artificial intelligence truly is, how it works, and its history. Second, I'll look into the ethical and moral implications of artificial intelligence. Finally, I'll explain why there are avenues that computer science should not pursue even though it may have the capability of doing so.

So, what is artificial intelligence, and how does it work? To clear up some misconceptions: no, artificial intelligence is not SKYNET or the Terminator, and no, it will not "be bahck". A standard definition of "artificial intelligence" is "the branch of computer science concerned with making computers behave like humans." Although the definition of artificial intelligence has been disputed over the years, the great philosophers and computer scientists have all essentially been getting at more or less the same point: artificial intelligence aims to simulate the functions of the average human being. The term "artificial intelligence" was coined in 1956 by John McCarthy, then of Dartmouth College and later of the Massachusetts Institute of Technology. That said, humanity has always had an interest in automatons, or self-operating machines, dating back to Ancient Greece. In fact, the great lyric poet Pindar of Ancient Greece wrote in his seventh Olympic Ode,

"The animated figures stand
Adorning every public street
And seem to breathe in stone, or
move their marble feet."

According to Integrative Psychiatry, the human body itself functions on four main neurotransmitters: serotonin, dopamine, GABA, and norepinephrine. Integrative Psychiatry goes on to define neurotransmitters as powerful chemicals that regulate numerous physical and emotional processes, such as mental performance, emotional states, and pain response. Virtually all functions in life are controlled by neurotransmitters. They are, in essence, the brain's chemical messengers. Serotonin plays an important part in the regulation of learning, mood, and sleep. Dopamine is responsible for motivation, interest, and drive. GABA is the major inhibitory neurotransmitter in the central nervous system. Norepinephrine, also known as noradrenaline, is the neurotransmitter that stimulates excitement. Simply put, all of these neurotransmitters that are responsible for everything we do can be simulated, and that is the driving force behind the development of artificial intelligence. The reality is that there is nothing limiting the complete simulation, recreation, and, quote unquote, "improvement" of a human being in every single way.

By far the largest breakthrough in the field of artificial intelligence came with the invention of computers. According to Jack Copeland, Professor of Philosophy at the University of Canterbury, the German civil engineer and inventor Konrad Zuse created the world's first functional program-controlled computer in May 1941. Perhaps the best-known early computer, and the first general-purpose electronic computer, was the ENIAC, or Electronic Numerical Integrator And Computer, built at the University of Pennsylvania by John W. Mauchly and J. Presper Eckert in November of 1945 with the intention of solving basic mathematical equations. The ENIAC is renowned for its monstrous size; according to Martin H. Weik's 1955 Survey of Domestic Electronic Digital Computing Systems, the ENIAC weighed more than 30 tons, measured roughly 8 by 3 by 100 feet, took up 1,800 square feet, and consumed 174 kW of power. Now, let's compare it to the microSD card, introduced in 2005 and commonly used for everything from digital cameras to portable gaming systems. According to SanDisk's website, a microSD card measures 15 by 11 by 1 millimeters – about the size of an average fingernail – with computing power many thousands of times that of the ENIAC. If, over the span of sixty years, we have been able to make such a large improvement – from the size of a small building to the size of a fingernail – we can see that complete artificial intelligence is not too far out of our reach. Technology keeps rolling along – faster and faster and faster – as our horizons become smaller. However, a question arises from this: is humanity ready for these advances, or should our horizons remain our horizons – at least for now?
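
For a rough sense of the scale of that shrinkage, here is a quick back-of-the-envelope comparison in Python. It is purely illustrative and uses only the figures quoted above (8 by 3 by 100 feet for the ENIAC, 15 by 11 by 1 millimeters for a microSD card), comparing nothing but physical volume:

```python
# Back-of-the-envelope volume comparison, using only the figures quoted above.
MM_PER_FOOT = 304.8

# ENIAC: roughly 8 ft x 3 ft x 100 ft (per Weik's 1955 survey, as cited above).
eniac_volume_mm3 = (8 * MM_PER_FOOT) * (3 * MM_PER_FOOT) * (100 * MM_PER_FOOT)

# microSD card: 15 mm x 11 mm x 1 mm (per SanDisk, as cited above).
microsd_volume_mm3 = 15 * 11 * 1

ratio = eniac_volume_mm3 / microsd_volume_mm3
print(f"ENIAC occupied roughly {ratio:,.0f} times the volume of a microSD card.")
# Prints a figure on the order of 400 million.
```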

Should we analyze the ethical and moral implications of developing artificial intelligence, there's a simple answer: yes, this metaphorical horizon should remain on our horizon. Seeing as artificial intelligence would possess what Nick Bostrom of Oxford University calls superintelligence – by definition, any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills – one must consider artificial intelligence systems' rights. What kind of rights would they get, if any? What types or forms of artificial intelligence would get these rights? As many commonly argue, an artificial intelligence should get rights if it were made to act and function in exactly the same way as a human being. But what defines an artificially intelligent system as human enough to gain those rights? How do we factor in the inevitable emotions which artificial intelligence will be made to have? The main purpose of any intelligent system, whether it be an artificially intelligent system, an animal, or a human, is to survive. Therefore, it would only be our moral obligation to allow artificial intelligence to live its life and have the right to, as Thomas Jefferson so famously put it, "life, liberty, and the pursuit of happiness." The end-all question, and the one that nobody seems to be able to answer, is: how would we treat artificial intelligence once it's here? Simply put, there are many difficult ethical questions that will arise should artificial intelligence reach full realization.

Finally, I'd like to delve into why there are avenues that computer science should not pursue even though it may be capable of doing so. One of these avenues, as I've already addressed, is the development of artificial intelligence; if artificial intelligence were developed, human society would find itself confronted with profound moral and ethical issues it may not be able to address with sufficient wisdom and responsibility. Should artificial intelligence be developed, we would have to recognize that any and all computer systems programmed with it would be, in essence, a new form of life. This would present political and social conflict of a magnitude never before seen in all of history – in the entire panorama of humanity itself.

In summation, although I don't believe in limiting the flow and development of technology, there are some things that may present issues which humanity is not ready to handle, such as artificial intelligence. Much like the development of nuclear weapons or Justin Bieber's latest album, there are simply some things that should not be pursued unless it is absolutely dire to do so. The great minds of the scientists and mathematicians of today should not bring burdens upon humanity, however inevitable those burdens may be. According to Hans Moravec's book, Robot: Mere Machine to Transcendent Mind, robots will get smarter and smarter so fast that by the year 2040, machine will exceed man in nearly all respects. I impose this question upon you: is humanity ready?
Vizzed Elite
Timecube


Affected by 'Laziness Syndrome'

Registered: 09-30-10
Last Post: 2445 days
Last Active: 1036 days

   

01-22-13 10:11 PM
thudricdholee is Offline
| ID: 729324 | 298 Words

thudricdholee
Level: 58


POSTS: 661/834
POST EXP: 88287
LVL EXP: 1554318
CP: 1021.7
VIZ: 4398

Likes: 0  Dislikes: 0
It's a good essay for the most part.

My only quibble is at the end, when you say "I impose this question upon you: is humanity ready?" "Impose" indicates that you're thrusting it on them, forcing them to answer. Maybe a better word would be "propose"? Perhaps that's what you meant?

As to whether or not we're ready for AIs yet, the answer is no. We can't even make up our collective mind about cloning non-human animal life (hello, Dolly). The question of whether a non-human, free-thinking organism could have a 'soul' would cause holy wars and would be messy and crazy. One of the books I recommend below (Butterfly and Hellflower) tells of a society so determined to wipe out AIs, and AIs so determined to protect themselves, that they literally started a war that nearly dropped the whole of humanity back into the dark ages – one in which the very whisper of illegal tech is enough to make even the most hate-filled enemies band together to viciously destroy the offender. I believe that this could happen if it comes too quickly. Mankind has ever feared what it didn't understand.

There's a lot of good fiction out there dealing with this kind of thing: from darkly dystopian (try Do Androids Dream of Electric Sheep? by Philip K. Dick, the inspiration for the movie Blade Runner), to thought-provoking and pro-AI (try The Moon Is a Harsh Mistress by Robert Heinlein, a book in which an AI wakes up and helps organize a rebellion on the moon), to asking the very question "can AIs love?" (try Butterfly and Hellflower by Eluki Bes Shahar or Fool's War by Sarah Zettel). As you can see, this is one of my favorite subjects to read about.


Trusted Member
The Domonator
Like a SIR


Affected by 'Laziness Syndrome'

Registered: 11-20-12
Location: ...oh, just around.
Last Post: 3353 days
Last Active: 2201 days

01-27-13 07:14 AM
thenumberone is Offline
| ID: 731532 | 155 Words

thenumberone
Level: 143


POSTS: 5040/6365
POST EXP: 365694
LVL EXP: 35086786
CP: 4946.4
VIZ: 329756

Likes: 0  Dislikes: 0
My main objection is based on disagreement over A.I.
As a great chunk of academics argue, if we have programmed a computer to act human, it's not acting of its own will, and the conclusions it reaches are implanted by us.
As it currently stands, computers are VFIs.
Very fast idiots.
They're modelled on our brains, but we don't actually fully understand how our brains work. Try to get a computer to write poetry, judge art, or anticipate human behaviour, and it falls flat.
The forefront of progress is self-learning robots. Even then, it is only by tying weights to certain parameters that we can make them learn correctly. A true A.I. would modify these weights itself, as humans do as we learn.
Equally, I think we should pursue these avenues, or else we are falling short of all we can do. There will always be ethical questions; we need to answer them, not avoid them.
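
For readers who haven't seen what "tying weights to certain parameters" looks like in practice, here is a minimal, purely illustrative sketch in Python: a classic perceptron learning a toy function. The data and learning rate are invented for the example; the point is that the update rule and the learning rate are fixed by the programmer, and the machine only adjusts its weights within that fixed recipe.

```python
# Minimal, illustrative sketch: a single artificial "neuron" (a perceptron)
# whose weights are adjusted by an update rule and learning rate that a human
# chose in advance. Toy data; not modelled on any particular real system.

def train_perceptron(samples, labels, learning_rate=0.1, epochs=25):
    """Learn weights for a two-input perceptron with the classic update rule."""
    w = [0.0, 0.0]  # the weights the machine adjusts
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - prediction
            # The update recipe itself is fixed by us, not chosen by the machine.
            w[0] += learning_rate * error * x1
            w[1] += learning_rate * error * x2
            b += learning_rate * error
    return w, b

# Toy example: learn the logical AND function from four labelled samples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
print(train_perceptron(samples, labels))
```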
Vizzed Elite
Bleeding Heart Liberal


Affected by 'Laziness Syndrome'

Registered: 03-22-11
Last Post: 3400 days
Last Active: 3400 days
