Artificial Intelligence: Are We Too Late? (Page 4/6)
fredtoast AUG 17, 05:22 PM
When AI takes over the world, will all AI robots agree with each other, or will they split into factions and go to war?

They learned their "intelligence" from us, so I think they probably will.
Raydar AUG 17, 10:13 PM
Interesting discussion. I'll just drop this here.



Driverless Car Gets Stuck in Wet Concrete in San Francisco
Valkrie9 AUG 21, 05:10 AM

Boat Show

AI for you
What a wonderful and humanitarian effort, making the world a loving place.
Everyone may have an AI cutie synth, like in some sci-fi movie.
Honey, I'm home !
Ah ! The future.

' I think I'll have a dozen spring rolls, with hot and spicy plum sauce ! '

82-T/A [At Work] AUG 21, 01:17 PM

quote
Originally posted by Cliff Pennock:

AI works, but nobody knows exactly how or why. Essentially, AI based on LLMs (Large Language Models) is nothing more than a collection of complex algorithms that can predict text. You can find a simple form of AI in your phone when, for example, you're typing a WhatsApp message. Your phone suggests possible next words as you type.





I respectfully disagree, to an extent. When we say "nobody knows exactly how or why," that is not true. The people who designed the LLM being used are the ones who define the various weights and rules. There are structures such as knowledge graphs, which build inferences between different concepts in context. At the end of the day, though, LLMs like ChatGPT are only as good as the data that's fed into them. If you feed in garbage, or, for example... social media, you're going to get the same kind of data out, or an amalgamation of it.
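The next-word prediction described in the quote above can be sketched with a toy bigram model. The corpus and counts here are invented purely for illustration, and real LLMs use neural networks over token embeddings rather than raw counts, but the "predict the next word from context" idea is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram model trained on a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(word):
    """Return the most frequently seen word after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # "cat" follows "the" most often in this corpus
```

Feed it a skewed corpus and the suggestions skew the same way, which is the garbage-in, garbage-out point in miniature.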

There are big problems with ChatGPT-style models, though, largely because they don't respect rule integrity. For example, when an LLM was used to help lawyers, it was so confidently wrong about things it simply made up: it invented new laws and cited cases that never existed. Which gets to the next point...



quote
Originally posted by Cliff Pennock:

That's where people misunderstand AI's intelligence. It's not doing the same things we do but faster. It's actually doing things differently. Smarter. In ways we actually don't comprehend. And yes, much, much faster. ASI is expected to be able to come up with "solutions" in milliseconds, where the combined human race would perhaps take thousands of years. AI is literally thinking out of the box. Take the newest chess AI. It is obliterating every Grand Master out there. And in ways that they have never seen before. Or have never thought of before. That actually makes them think differently about chess.




AI can do well what humans can already do. It can't do things that have never been done before, because AI lacks reasoning.

Chess is an example of simply taking every possible combination of moves, determining which ones are most effective under the circumstances, and then applying them. This is all based on something called "reinforcement learning." It works on the concept of a sigmoid curve... the gains are significant in the beginning, but then, as it learns all the possible combinations that could ever really exist, it tapers off.
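The sigmoid-shaped learning curve described above (fast early gains that taper off as the model saturates) can be sketched with a logistic function. The numbers here are illustrative only, not measured from any real system:

```python
import math

def skill(t, midpoint=50.0, rate=0.1):
    """Logistic curve: fraction of attainable skill after t training rounds."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

# Marginal gain over 10 rounds, sampled early, mid, and late in training.
gains = [skill(t + 10) - skill(t) for t in (0, 40, 80)]
# Early gain is small (warm-up), mid-training gain is largest,
# late gain is tiny: the curve has tapered off.
print([round(g, 3) for g in gains])
```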


There are concerns, and things that need to be addressed. For example, applying reinforcement learning to nuclear weapon planning (e.g., WOPR from WarGames, which is a real thing, just not under that name), or using various forms of modality processing (like friend-or-foe identification), can be bad if there is no human in the middle. There does need to be an ethics and morals standardization for AI, though...
Cliff Pennock AUG 21, 05:11 PM

quote
Originally posted by 82-T/A [At Work]:

I respectfully disagree to an extent. When we say "nobody knows exactly how or why," this is not true.



Actually, it is. AI is built purely on theory. And it works, but we really don't know how or why. For example, here is an interview with Professor Sam Bowman of NYU.

It's like a caveman who has learned that cooking his food makes it less likely that the food will make him sick, but he doesn't know why.


quote
AI can do well what humans can already do. It can't do things that have never been done before, because AI lacks reasoning.



And that's exactly the mistake people make when they think of AI. Because the only thing AI can do is reason. It has no knowledge. None. It has no database of facts it relies on. That's the whole concept of neural networks.


quote
It's so confidently wrong on things that it simply makes up.



It has no concept of right or wrong because again, it has no knowledge. It follows neural pathways:


quote
Originally posted by Cliff Pennock:

Contrary to the belief that AI stores information, it does not have a database of facts to draw from. During its training, AI is exposed to vast amounts of data and facts, which reinforce existing neural pathways, much like how human memory functions. When posed with a question, AI follows these neural pathways and regurgitates any information it encounters along them. However, it lacks the equivalent of human "long-term memory." In our brains, when we reach a point along these pathways where we can no longer verify information from our long-term memory, we stop speculating or providing answers. AI, lacking long-term memory (at least for now), continues producing responses until the pathways become weak and illogical.




quote
Chess is an example of simply taking every possible combination of moves, determining which ones are more effective under the circumstances, and then applying them.



Nope. That is not how AI chess works. It does not try every possible combination. That's in fact how the older chess programs worked. They simply tried every possible move (and possible response moves) and gave each a score. The move with the highest score was the move it made. Not so in AI chess. In fact, AI is obliterating chess masters worldwide with plays that make them rethink chess. I read a very interesting article about that some time ago, but can't find it anymore.
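The search-and-score approach described above for older chess programs is essentially minimax. A minimal sketch over a toy game tree (the tree and scores are invented for illustration; real engines add alpha-beta pruning and depth limits, and modern engines like AlphaZero instead use a learned neural evaluation guiding the search rather than exhaustive enumeration):

```python
# Exhaustively search the game tree and score every line of play.
# The tree is a toy nested list whose leaves are position scores
# from the maximizing player's point of view.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):   # leaf: a scored position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two candidate moves; the opponent then picks the reply worst for us.
tree = [[3, 5],    # move A: opponent can hold us to 3
        [2, 9]]    # move B: opponent can hold us to 2
best = max(range(len(tree)), key=lambda i: minimax(tree[i], maximizing=False))
print("best move:", "AB"[best])  # move A guarantees the higher score
```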
maryjane AUG 21, 05:46 PM

quote
AI is obliterating chess masters worldwide with plays that make them rethink chess. I read a very interesting article about that some time ago, but can't find it anymore.


AI had it moved so no human could access it..
Wichita AUG 21, 07:35 PM
Valkrie9 AUG 23, 09:19 AM



82-T/A [At Work] AUG 23, 10:46 AM

quote
Originally posted by Cliff Pennock:
Actually, it is. AI is built purely on theory. And it works, but we really don't know how or why. For example, here is an interview with professor Sam Bowman of NYU.

It's like a caveman who has learned that cooking his food makes it less likely that the food will make him sick, but he doesn't know why.

Nope. That is not how AI chess works. It does not try every possible combination. That's in fact how the older chess programs worked. They simply tried every possible move (and possible response moves) and gave each a score. The move with the highest score was the move it made. Not so in AI chess. In fact, AI is obliterating chess masters worldwide with plays that make them rethink chess. I read a very interesting article about that some time ago, but can't find it anymore.




Again, respectfully, this is literally what I do for a living. I manage a team of very highly paid AI researchers as a Principal Investigator for a research organization. Many members of my team have PhDs in Math and Computer Science (or are in the process of getting them), with a few having only a master's in machine learning. Nearly half have patents and IEEE papers on this. How it works is absolutely understood. There is a principle in AI called "explainability," which is a measure of how well you can interpret what the math is doing. The neural pathways you talk about are derived using a series of weights and tokens that can be changed, modified, etc. One common measure of a learning model's success is its "F1 score."
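The F1 score mentioned above is straightforward to compute: it is the harmonic mean of precision and recall, a standard measure of a classifier's performance. A minimal sketch (the counts below are made up for illustration):

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)   # of everything flagged, how much was right
    recall = tp / (tp + fn)      # of everything real, how much was found
    return 2 * precision * recall / (precision + recall)

# A model that found 8 of 10 real threats (2 missed) with 2 false alarms:
print(round(f1_score(tp=8, fp=2, fn=2), 2))  # 0.8
```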

I highly recommend that anyone who's interested in this take the course "AI for Everyone."

https://www.coursera.org/learn/ai-for-everyone/ ...it's free on Coursera.


Again, this is an awesome article you posted, but I want to be clear: this isn't the kind of sci-fi that people think it is. It's just math... hard math, yes... but it's math. The problem, though, still remains that when used improperly, it can cause a lot of problems. There absolutely could be a thermonuclear war because someone ****ed up a learning model... you would just have to know that when the terminators are coming at you, there are no emotions behind it... they're just doing what they've been programmed to do. AI is not sentient, it's just math.
theBDub AUG 23, 04:00 PM
No, we are not too late. AI is just what we have chosen to call a collection of methods we use to program machines to perform cognitive functions, even functions the machine was not explicitly designed to perform. I respectfully but completely disagree that "we don't know why it works"; that's simply not true. Maybe no single person understands every step of the more complex algorithms, but we certainly know how they work.

I'm not remotely concerned about AI as a general tool. I'm a little concerned with how people use it: misusing information received from AI is common because most people don't understand what AI is or how it works, let alone AI that is developed for purposes other than general public consumption. I'm also a little concerned with how certain programs are trained: train an anomaly detection algorithm to detect threats based on historical data, and you may find your algorithm produces false flags based on characteristics like race and gender. But I'm not concerned about it generally existing and continuing to grow as a practice for the next few decades.
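The false-flag concern above can be made concrete with a quick per-group check on a trained detector's output. The records here are entirely made up for illustration:

```python
# Each record: (group, flagged_by_model, actually_a_threat).
records = [
    ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("A", False, False), ("B", True,  False), ("B", True,  False),
    ("B", False, False), ("B", True,  True),
]

def false_flag_rate(group):
    """Share of genuinely harmless cases in `group` the model flagged anyway."""
    harmless = [flag for g, flag, threat in records if g == group and not threat]
    return sum(harmless) / len(harmless)

for g in ("A", "B"):
    print(g, round(false_flag_rate(g), 2))
# Group B's harmless cases are flagged more often: the model has
# picked up a spurious correlation from the historical data.
```

Auditing per-group error rates like this is one basic way to catch a model that trained itself on a biased history.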

Honestly, I find most people who are afraid of AI as a general concept are misinformed as to what it is and how it works.