Discussion: Artificial Intelligence

Becky, Webmistress · Joined Mar 25, 2003 · Messages: 7,424 · Reaction score: 1,511
We live in a world that is becoming ever more dependent on computers, technology and gadgets. As a result of this continued advancement, a topic that has been in the news a lot recently is Artificial Intelligence (AI). The prospect of machines capable of independent thought is very real, and several prominent experts have come forward with their thoughts on the matter:

Stephen Hawking - "The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not know which."

Elon Musk - “I think that the biggest risk is not that the AI will develop a will of its own,” Musk says, clarifying his unease, “but rather that it will follow the will of people that establish its utility function.”

An Open Letter on Artificial Intelligence, signed by various industry experts

There are also dedicated research companies working in this area.

The subject of AI raises many questions and opinions, so here are a few prompts to get you started:

  • How would you define AI? Is it simply a replication of what humans can do, or more than that?
  • How close is AI to beating the Turing Test? Is the Turing Test the best way to test AI?
  • What benefits could be achieved through strong AI? Could we cure cancer? Eradicate poverty? What else?
  • What would a future with AI look like?
  • What are the potential risks?
  • What is your favourite AI film and what issues does it raise? e.g. The Terminator, I, Robot, Ex Machina, 2001: A Space Odyssey, The Matrix, etc.
  • How could it be controlled to protect humanity? What safeguards should be put in place, if any?
 
Personally I think that AI is a step too far and the human race could find itself in trouble. :eek:
 
"How could it be controlled to protect humanity? What safeguards should be put in place, if any?"


The standard reply is always Asimov's Three Laws of Robotics, which state:



  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

It all sounds very straightforward, but even Asimov had to introduce another law, the 0th law, so named because he thought it should come before the First Law. It states that:


0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
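As an aside, the precedence built into the laws (each applies only "except where it would conflict with" a higher one) can be modelled as a lexicographic comparison. A purely illustrative Python sketch (the `Action` fields and `choose` helper are my own invention, not anything from the stories):

```python
# Toy model: each candidate action records which laws it would violate,
# and `choose` picks the action whose highest-priority violation is
# least severe. So disobeying an order (Second Law) is preferred to
# harming a human (First Law).
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False   # 0th Law
    harms_human: bool = False      # First Law
    disobeys_order: bool = False   # Second Law
    endangers_robot: bool = False  # Third Law

def choose(actions: list) -> Action:
    """Pick the least-bad action under the law hierarchy.

    Tuples of booleans compare lexicographically, so a single 0th-Law
    violation outweighs any combination of lower-law violations.
    """
    return min(actions, key=lambda a: (a.harms_humanity, a.harms_human,
                                       a.disobeys_order, a.endangers_robot))
```

For example, given a choice between obeying an order that would harm a human and simply disobeying the order, `choose` picks disobedience, exactly the conflict the "except where" clauses are there to resolve.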


Of course, many SF novels revolve around instances when rogue robots break these laws. :D


The other big problem is one of definition: how do you define a human being and a robot (both concrete, physical entities), let alone humanity (an abstract concept)?



Life is never simple. :D
 
Wow!!!!

I've just checked and there are 206 robots accessing PCR right now!!!!!!!

:eek::D
 
The standard reply is always Asimov's Three Laws of Robotics

We watched Automata the other night, and similar laws were imposed on the robots in that. The first law was that they cannot harm any form of life, and the second was that they cannot alter themselves or others. I thought that second law was quite clever - Asimov's laws don't account for the fact that the AI could simply change their programming/hardware as they see fit.

It's a really good film, I'd recommend it to anyone who hasn't seen it.

Another factor to throw into the mix - what are the ethics behind it all? If AI is self-aware, does it have a consciousness? How should we treat them?!
 
Asimov's laws don't account for the fact that the AI could simply change their programming/hardware as they see fit.

If they changed their programming they would still have to obey the laws. The basic laws cannot be overridden.

If AI is self-aware, does it have a consciousness? How should we treat them?!

This very matter is being explored in the TV series "Humans", which is in the middle of its run. Unfortunately I missed the first (2014) series.
 
I wonder whether, if we manage to develop AI in the future and combine it with high-tech robotics, we would effectively have 'human replacements'. Would this mean that a lot of people could pursue hobbies whilst the robots do all the work...?

Maybe I'm just indulging in wishful thinking given that it's Monday morning! :lol:
 
There are a number of subjects that seem to have fascinated mankind for a long, long time: robots are one of them; alien life forms and the existence of higher entities are others.

I suppose given time we'll be able to cram enough logic gates into one space, mechanics/hydraulics will improve, and we will have something that can ape a human being fairly well.

Real thought, though - just my opinion - can only be mimicked, and I think real reason, thought and perhaps even decision-making can only come from that grey matter that nestles between our ears.

The thought of a bunch of chips suddenly developing a real mind is endless fodder for science fiction, though, and long may it survive; Mr Flops does enjoy a good far-fetched tale or two.
 
Would this mean that a lot of people would be able to pursue hobbies whilst the robots do all the work...?

Great idea but how would we all earn a living if we didn't have jobs? :D

And are we part way there already, with the number of robots already working? I assume robot maintenance doesn't come cheap either. :)
 
Great idea but how would we all earn a living if we didn't have jobs? :D

Your robot would earn your living for you! :D

That being said, I can't imagine a time when no-one works - it's unlikely to be 100%.

Anyone seen a film called Her?
 
I don't want a job. I feel I have done my bit for society, and now I'm enjoying my retirement and expecting the younger generation to look after me. :thumb::lol::dance::cheers::lol::bow::p:p
 