Pennock's Fiero Forum
  Totally O/T
  Artificial Intelligence: Are We Too Late? (Page 2)

Artificial Intelligence: Are We Too Late? by Cliff Pennock
Started on: 07-18-2023 12:15 PM
Replies: 50 (769 views)
Last post by: 1985 Fiero GT on 09-02-2023 11:33 AM
Cliff Pennock
Administrator
Posts: 11899
From: Zandvoort, The Netherlands
Registered: Jan 99



Posted 08-23-2023 05:10 PM
 
quote
Originally posted by 82-T/A [At Work]:

Again, respectfully, this is literally what I do for a living. I manage a team of very highly paid AI researchers as a Principal Investigator for a research organization. Many members of my team have PhDs in Math and Computer Science (or are in the process of getting them), with a few only having a master's in machine learning. Nearly half have patents and IEEE papers on this. How it works is absolutely understood. This is a principle in AI called "explainability," which is a measure of how well we can interpret what the math is doing.


I'm on the fence about this since I'm neither an AI scientist nor a mathematician. If you say it's fully explainable, then I wonder why many AI scientists say we do not fully understand how [the current AI] works the way it does. Sam Bowman is just one of them. But there are many, many more. And most are highly regarded AI scientists. I can remember watching a Sam Altman interview where he pretty much stated that they understand the "why", but not the "how". At a certain point, AI's behavior is no longer expected or predictable. Beyond a certain number of parameters, things happen that they did not expect and can't really explain.

Sure, simple AI can be explained quite easily. Heck, even I experimented with (small) neural networks in the 80s. And even though the results were pretty mind-blowing, we were fully able to explain why it worked the way it did. But the latest iteration of GPT, for instance, is doing things they can't really explain.
82-T/A [At Work]
Member
Posts: 25863
From: Florida USA
Registered: Aug 2002



Posted 08-23-2023 05:53 PM
 
quote
Originally posted by Cliff Pennock:

I'm on the fence about this since I'm neither an AI scientist nor a mathematician. If you say it's fully explainable, then I wonder why many AI scientists say we do not fully understand how [the current AI] works the way it does. Sam Bowman is just one of them. But there are many, many more. And most are highly regarded AI scientists. I can remember watching a Sam Altman interview where he pretty much stated that they understand the "why", but not the "how". At a certain point, AI's behavior is no longer expected or predictable. Beyond a certain number of parameters, things happen that they did not expect and can't really explain.

Sure, simple AI can be explained quite easily. Heck, even I experimented with (small) neural networks in the 80s. And even though the results were pretty mind-blowing, we were fully able to explain why it worked the way it did. But the latest iteration of GPT, for instance, is doing things they can't really explain.



I'll read up on Sam Bowman, but I think the first thing that needs to be common ground is what exactly constitutes an AI scientist. If it's someone who essentially has a computer science background and works in the industry using tools that others have developed... e.g., knows how to use ChatGPT (or ComputerVision, or whatever) simply through an API... then I don't think that individual could reasonably be expected to explain AI.

The people who use ChatGPT may be in awe of it, but the people who developed it absolutely know what it's doing. I think it's important to understand that what makes AI do what it does is math... it's straight math. There are only a few mathematicians... maybe 100 in the entire country... who fully understand how AI works, because they're the ones who developed the algorithms that everyone else is using. I work with 2 of those individuals... out of my team of 20. The rest reasonably understand everything, but largely have to go to the other two for the very advanced concepts (e.g., designing our own algorithms for other / specific things).

I echo what theBDub has said... my opinion... I think the problem with AI is that it can potentially be dangerous when it's used in a situation where you'd otherwise expect someone to use common sense. A good example (that someone told me) is when you use AI to design a new aerodynamic airplane. If you are not 100% clear on all the variables and checks and balances, it will basically give you a design that looks like a piece of paper... a 0 coefficient of drag. And you can apply that logic to everything from using Reinforcement Learning to gamify nuclear war (enemy A has warheads here and here, and it takes this amount of time to reach point B, so I will launch these warheads first so they'll strike before others can strike). That becomes a problem when there's no human in the middle.
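To make that airplane example concrete, here's a toy sketch in Python. The drag and strength formulas and all the numbers are completely made up by me for illustration; it's not any real design tool, just an optimizer handed an objective with a missing constraint:

def drag(thickness_m):
    # Made-up relationship: drag grows with wing thickness.
    return 0.02 + 0.5 * thickness_m

def strength(thickness_m):
    # Made-up relationship: structural strength grows with thickness.
    return 1000 * thickness_m

candidates = [t / 1000 for t in range(1, 301)]  # 1 mm .. 300 mm

# Objective with NO structural check: the optimizer picks the thinnest wing it can.
best_naive = min(candidates, key=drag)

# Same search with the "common sense" constraint a human engineer would insist on.
feasible = [t for t in candidates if strength(t) >= 150]
best_constrained = min(feasible, key=drag)

print(f"no constraint:   {best_naive * 1000:.0f} mm thick (basically a sheet of paper)")
print(f"with constraint: {best_constrained * 1000:.0f} mm thick")

Leave out the strength check and the "best" design collapses to the thinnest thing the search space allows. That's the missing-common-sense problem in a nutshell.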

So to be fair, we do really need to be concerned about when and where things are being used.

I provide a more applicable example to IRL as the kids say. There's a state in the United States which is using a machine learning tool that provides sentencing terms to a judge for criminals. What this means is, when a jury finds a criminal guilty of a crime, this software then makes the determination based on a number of factors (age, priors, etc.) and determines the likelihood of that criminal getting out after a certain amount of time and committing another offense. It then spits out a certain number of years that the judge can then use. There are judges who are using this. There was a case that went all the way to that state's Supreme Court (not the Federal US Supreme Court), and it was UPHELD... which is absolutely insane to me. The idea that a computer program is making a determination of how many years a person should go to jail is ... just beyond me. It's literally inhuman. This means that no matter the efforts of the defense lawyer, or please from the defendant, the AI is making the decision. Of course, the judge can choose to disregard these recommendations, but this has literally been a thing. If anyone is really interested, I can look up the exact case that was appealed (and the decision was upheld).
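For anyone curious what that kind of tool is doing under the hood, here's a purely hypothetical sketch in Python. The factors, weights, and sentence thresholds are all invented by me for illustration; I am not claiming this is how the real (proprietary) software works:

import math

def recidivism_risk(age, prior_convictions, employed):
    # Hypothetical weighted factors squashed into a 0-1 "risk" score (invented weights).
    z = 1.5 - 0.04 * age + 0.6 * prior_convictions - 0.8 * (1 if employed else 0)
    return 1 / (1 + math.exp(-z))

def recommended_sentence_years(risk):
    # Hypothetical mapping from risk score to a recommended term (invented thresholds).
    if risk < 0.3:
        return 2
    if risk < 0.6:
        return 5
    return 10

risk = recidivism_risk(age=24, prior_convictions=3, employed=False)
print(f"risk score: {risk:.2f} -> recommended term: {recommended_sentence_years(risk)} years")

The unsettling part is exactly my point: the defendant gets reduced to a handful of numeric features, and the "decision" is arithmetic on weights nobody in the courtroom can argue with.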

But yeah... people need to be smart about these things and not be too quick to immediately hand off any decision-making processes that require "reason," because AI cannot reason for **** .

[This message has been edited by 82-T/A [At Work] (edited 08-23-2023).]

Cliff Pennock
Administrator
Posts: 11899
From: Zandvoort, The Netherlands
Registered: Jan 99



Posted 08-24-2023 08:45 AM
Sam Bowman, professor at NYU, technical staff at Anthropic, AI scientist: "We built it, we trained it, but we don't know what it's doing."
Stuart Russell, professor at Berkeley, computer scientist, AI researcher: "We don't understand how these systems work."
Sundar Pichai, CEO Google: "We don't fully understand how AI works."

These are just a few of the people who claim we do not fully understand how AI works. And all (perhaps with the exception of Sundar Pichai, but he is talking on behalf of his AI department) are pretty much "AI Scientists". Again, I'm not saying I don't believe you, but are you saying that these people are lying and/or wrong?

As for AI lacking reasoning abilities, this is what Sam Altman (CEO OpenAI) has to say about that:

 
quote


The right way to think of the models that we create is a reasoning engine, not a fact database. They can also act as a fact database, but that's not really what's special about them – what we want them to do is something closer to the ability to reason, not to memorize.


And a couple of articles about our knowledge of the inner workings of AI:

Why humans will never understand AI
We will never fully understand how AI works
Even the scientists who build AI can’t tell you how it works.
We do not understand how these systems work
82-T/A [At Work]
Member
Posts: 25863
From: Florida USA
Registered: Aug 2002



Posted 08-24-2023 09:16 AM
 
quote
Originally posted by Cliff Pennock:

Sam Bowman, professor at NYU, technical staff at Anthropic, AI scientist: "We built it, we trained it, but we don't know what it's doing."
Stuart Russell, professor at Berkeley, computer scientist, AI researcher: "We don't understand how these systems work."
Sundar Pichai, CEO Google: "We don't fully understand how AI works."

These are just a few of the people who claim we do not fully understand how AI works. And all (perhaps with the exception of Sundar Pichai, but he is talking on behalf of his AI department) are pretty much "AI Scientists". Again, I'm not saying I don't believe you, but are you saying that these people are lying and/or wrong?

As for AI lacking reasoning abilities, this is what Sam Altman (CEO OpenAI) has to say about that:


And a couple of articles about our knowledge of the inner workings of AI:

Why humans will never understand AI
We will never fully understand how AI works
Even the scientists who build AI can’t tell you how it works.
We do not understand how these systems work



See, this is what kind of confuses me here (I promise I will read about this guy), but all the AI researchers and mathematicians involved in AI that I've worked with, across the intelligence community and throughout Silicon Valley, will tell you that AI cannot reason. Specifically, they point to AI making things up and/or lacking the kind of logic a human would recognize as "common sense."

My honest opinion: it sounds to me like this guy Sam is using his position to be dramatic and get his name out there. "AI" has been around for ~50 years; most of the math, whether that's Bayesian theory, distributions, encoders, classifiers, etc., has existed long before now. The difference is that we just never had the processing power to do anything with it. What's changed, though, is that we've gone through 2 or 3 "AI winters," which is essentially where huge leaps in AI result in a lot of fanfare, then people get burnt out on it, and immediately afterwards there's a huge divestment from industry. During those highs, though, people have largely warned others not to promise too much... as in, not to make statements about what AI can and cannot do, because when it doesn't do what it should... people eventually lose interest and investment dries up.

As an example, an entry-level AI researcher makes about $180k. An average AI researcher makes about $250k. And the big companies like Amazon, Facebook, etc... they're hiring AI researchers / engineers and paying upwards of $500k for some dude just hand-jamming a laptop with math. These people are basically unicorns, and they require an insane financial investment to hire, not to mention the hardware costs that are required (either S3 buckets from AWS, or physical hardware that the company purchases). So a company reluctantly hires these people to stay ahead of the curve and show that they are with the in crowd (or even advance it by using known algorithms for new applications). But if AI loses steam, these will be some of the first places to be cut due to the outrageous costs associated with these projects.

Anyway, I'll read about this guy this weekend and let you know my thoughts...
Cliff Pennock
Administrator
Posts: 11899
From: Zandvoort, The Netherlands
Registered: Jan 99



Posted 08-24-2023 10:04 AM
 
quote
Originally posted by 82-T/A [At Work]:

they will all tell you that AI cannot reason. Specifically, they point to AI making things up and/or lacking the kind of logic a human would recognize as "common sense."


"Common sense" originates from accumulated knowledge rather than pure reasoning, as I detailed in an explanation I provided here). All outputs generated by AI are products of "reasoning" rather than from direct factual references. AI hallucination occurs when it follows neural pathways that are relatively weak. However, AI has no clue what is 100% factual and what isn't.

Detecting AI-generated "hallucinations" relies on either possessing prior knowledge that contradicts the output or the ability to verify its accuracy. In other words, we need knowledge. For example, if you were to ask ChatGPT about me, it would tell you a whole bunch of stuff that appears plausible. But if you didn't know me or had never heard of me, you would not be able to differentiate between truth and fabrication. It takes "knowledge" to spot that, not mere common sense.

AI's operation isn't rooted in a progression from stating established facts to inventing content when facts are absent. It in fact makes up stuff from the outset. However, the initial outputs are more likely to align with actual facts because of the neural pathways they emerge from. Consequently, statements AI makes about me could conceivably be accurate. If common sense were all we could use to judge its response, there would be no reason to label it as false. Only if it said that being the creator of the world's largest Fiero site enabled me to become the first man to set foot on the moon would a clear contradiction with common sense emerge. But that's not what happens. Not at all. I have never seen an answer from ChatGPT or Bing (GPT-4) or any other LLM that could be labeled illogical. AI answers might turn out to be incorrect after scrutiny, but rarely illogical.
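A crude way to picture it (a Python toy with pathway weights I made up on the spot, nothing like a real LLM): the generation step is exactly the same whether the continuation happens to be true or not. There is no fact lookup anywhere, only weights.

import random

# Invented "pathway strengths" for continuations of the prompt "Cliff Pennock is ...".
# Note there is no true/false flag anywhere, only a weight.
continuations = {
    "the founder of Pennock's Fiero Forum": 0.70,  # strong pathway
    "a Dutch software developer":           0.25,  # plausible, possibly wrong
    "the first man on the moon":            0.05,  # very weak pathway, almost never followed
}

def generate(weights):
    # The entire "decision": sample by pathway strength. No fact check.
    options, probs = zip(*weights.items())
    return random.choices(options, weights=probs, k=1)[0]

random.seed(1)
for _ in range(5):
    print("Cliff Pennock is", generate(continuations))

The plausible-but-unverified middle option is what a "hallucination" looks like; the absurd one sits on such a weak pathway that you will practically never see it.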

And yes, we fully understand how the AI algorithms work. Even I have some basic understanding. Like I said, I too experimented with neural networks way back when. However, certain AI capabilities remain a mystery, as they weren't part of the model's explicit training, and their origins can't be logically deduced.

The articles I posted about our lack of knowledge concerning the full workings of AI are all from the recent past: the past few months or so. If you go further back, there are many, many more, but I haven't included those since our knowledge of AI has grown quite a bit since then.

I also find it hard to believe these people are merely seeking attention, trying to get their 15 minutes of fame. Most of them aren't exactly "new guys" in the world of AI. And Sundar Pichai (Google's CEO) has no incentive to lie about this. Very much the opposite, really: economically, it would make much more sense for him to say they fully understand what it is they are creating. There is no benefit for Google at all in fear-mongering about this.
theBDub
Member
Posts: 9720
From: Dallas,TX
Registered: May 2010



Posted 08-24-2023 04:54 PM
 
quote
Originally posted by Cliff Pennock:


"Common sense" originates from accumulated knowledge rather than pure reasoning, as I detailed in an explanation I provided here). All outputs generated by AI are products of "reasoning" rather than from direct factual references. AI hallucination occurs when it follows neural pathways that are relatively weak. However, AI has no clue what is 100% factual and what isn't.

Detecting AI-generated "hallucinations" relies on either possessing prior knowledge that contradicts the output or the ability to verify its accuracy. In other words, we need knowledge. For example, if you were to ask ChatGPT about me, it would tell you a whole bunch of stuff that appears plausible. But if you didn't know me or had never heard of me, you would not be able to differentiate between truth and fabrication. It takes "knowledge" to spot that, not mere common sense.

AI's operation isn't rooted in a progression from stating established facts to inventing content when facts are absent. It in fact makes up stuff from the outset. However, the initial outputs are more likely to align with actual facts because of the neural pathways they emerge from. Consequently, statements AI makes about me could conceivably be accurate. If common sense were all we could use to judge its response, there would be no reason to label it as false. Only if it said that being the creator of the world's largest Fiero site enabled me to become the first man to set foot on the moon would a clear contradiction with common sense emerge. But that's not what happens. Not at all. I have never seen an answer from ChatGPT or Bing (GPT-4) or any other LLM that could be labeled illogical. AI answers might turn out to be incorrect after scrutiny, but rarely illogical.

And yes, we fully understand how the AI algorithms work. Even I have some basic understanding. Like I said, I too experimented with neural networks way back when. However, certain AI capabilities remain a mystery, as they weren't part of the model's explicit training, and their origins can't be logically deduced.

The articles I posted about our lack of knowledge concerning the full workings of AI are all from the recent past: the past few months or so. If you go further back, there are many, many more, but I haven't included those since our knowledge of AI has grown quite a bit since then.

I also find it hard to believe these people are merely seeking attention, trying to get their 15 minutes of fame. Most of them aren't exactly "new guys" in the world of AI. And Sundar Pichai (Google's CEO) has no incentive to lie about this. Very much the opposite, really: economically, it would make much more sense for him to say they fully understand what it is they are creating. There is no benefit for Google at all in fear-mongering about this.


I think you’re applying too many human concepts to an AI model. AI doesn’t even “know” language. It learns what characters are often near other characters, and what sets of characters are often near other sets of characters. It doesn’t explicitly “know” what it’s typing. Literally.
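A toy version of what I mean, in Python (trained on one made-up sentence, nowhere near a real model): it only records which character tends to follow which, then emits text by replaying those statistics. At no point does it "know" what any of it means.

import random
from collections import defaultdict

text = "the fiero is a mid engine car and the fiero is fun to drive "

# "Training": count which character follows which character.
follows = defaultdict(list)
for a, b in zip(text, text[1:]):
    follows[a].append(b)

# "Generation": repeatedly pick a statistically likely next character.
random.seed(0)
out = "t"
for _ in range(60):
    out += random.choice(follows[out[-1]])
print(out)

Real models work on tokens with vastly more context, but the principle is the same: nearness statistics, not meaning.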

When you query the model, it will do a fast and loose filter on its knowledge base to get a set for further queries. Then it’ll do a deeper review of the information in that knowledge base to find the most relevant information. For complex models, there may be more than just those two steps; there may be more filters built in to find the most relevant information most efficiently. It uses that to form its answer, and a conversational LLM also transcribes it into something the user can understand.

At any one of those steps, it may go down the wrong rabbit hole. It may be using some random Reddit threads as its knowledge base. It may be collating information from Pennock’s and webMD and CNN and they’re all conflicting. The AI doesn’t even “know” they’re conflicting. It just uses that as a knowledge base for a response.
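Roughly like this, as a Python sketch of the coarse-then-fine filtering I'm describing (the "documents" and the scoring are made up): a cheap pass narrows the pile down, a slower pass ranks what's left, and nothing in either step checks whether the surviving sources agree with each other.

# Made-up "knowledge base" snippets; note that two of them conflict.
docs = [
    "Pennock's Forum: the Fiero V6 makes 140 hp",
    "Reddit thread: the Fiero V6 makes 200 hp, trust me",
    "WebMD: drink more water",
    "CNN: car sales rose last quarter",
]

def coarse_filter(query, documents):
    # Step 1: fast and loose; keep anything sharing a word with the query.
    words = set(query.lower().split())
    return [d for d in documents if words & set(d.lower().split())]

def fine_rank(query, documents):
    # Step 2: slower scoring; rank survivors by word overlap. Still no fact-checking.
    words = set(query.lower().split())
    return sorted(documents, key=lambda d: len(words & set(d.lower().split())), reverse=True)

query = "how much hp does the fiero v6 make"
shortlist = coarse_filter(query, docs)
ranked = fine_rank(query, shortlist)
print(ranked[0])  # whichever source scored highest; nobody checked that the top two disagree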

So when it forms a response, it can be wrong. Way wrong. And you can tell it it’s wrong, but it may have taken the wrong turn like 4 filters ago, so it will argue that it’s not wrong.

This is simplified, but this is all known and understood. It’s not actually human reasoning in a machine.
Cliff Pennock
Administrator
Posts: 11899
From: Zandvoort, The Netherlands
Registered: Jan 99



Posted 08-25-2023 04:06 AM
 
quote
Originally posted by theBDub:

At any one of those steps, it may go down the wrong rabbit hole. It may be using some random Reddit threads as its knowledge base.


That's just it. It doesn't have a knowledge base. It has no database of facts. None whatsoever. It can't use the wrong data since it has none. It follows neural pathways. In that respect, the neural "brain" of AI works a lot like a human brain, with the exception that we have long-term memory from which we can draw "facts", and which we can use to scrutinize our own initial thoughts. Those initial thoughts come from our "neural net". Most of the time we don't consult our long-term memory either, which is why we sometimes say stuff that, when we "think" about it, turns out to be incorrect.

People think that AI remembers the data it is trained with. It doesn't. The training data is only used to strengthen neural pathways. If you tell it once that "1 + 1 = 2" and then ask how much 1 + 1 is, it might very well answer "purple". But if it was trained with hundreds of internet pages that told it that 1 + 1 = 2, and also with a single Reddit page that argues that 1 + 1 = 3, it would never answer 3, because that page only resulted in a very weak neural pathway that it will consequently never follow.
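As a very crude illustration (a few lines of Python with made-up counts, nothing like real training): if you think of each answer's pathway strength as how often the training data supported it, the lone Reddit page barely registers and is simply never followed.

from collections import Counter

# Pretend training data: a hundred pages say "2", one Reddit page says "3".
training_answers = ["2"] * 100 + ["3"]

counts = Counter(training_answers)
total = sum(counts.values())

# "Pathway strength" expressed as a fraction of the training signal.
for answer, count in counts.items():
    print(f"1 + 1 = {answer}: pathway strength {count / total:.3f}")

# A model following the strongest pathway answers 2 and never 3.
print("model answers:", counts.most_common(1)[0][0])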
Cliff Pennock
Administrator
Posts: 11899
From: Zandvoort, The Netherlands
Registered: Jan 99



Posted 08-25-2023 05:08 AM

 
quote
Originally posted by theBDub:

It’s not actually human reasoning in a machine.


It might not be human reasoning, but it is reasoning nonetheless. That is all AI does. And the fact that it's all done with algorithms doesn't change the fact that it's still reasoning.

The simplest proof that AI is able to reason is its ability to solve problems it was never trained for: problems that require reasoning. For example, Microsoft presented GPT-4 with a problem that required an intuitive understanding of the physical world. It was asked how to best stack certain items in a stable manner (I believe it was eggs, a laptop, a bottle and a few other things). The answer it gave was pretty mind-blowing. Not for a human, because a human could get to the same solution on knowledge alone, but for an AI that doesn't have that knowledge, it was pretty impressive. Microsoft went as far as saying it displayed "human reasoning," a statement that perhaps was a little over-enthusiastic.

So again: it might not be human reasoning, but it is reasoning nonetheless.
theBDub
Member
Posts: 9720
From: Dallas,TX
Registered: May 2010



Posted 08-25-2023 12:08 PM
 
quote
Originally posted by Cliff Pennock:


That's just it. It doesn't have a knowledge base.


A knowledge base is more a concept of how AI looks through information; I'm not speaking about an authoritative set of facts. It's a term used by AI teams (I have an AI team in my data office).
MidEngineManiac
Member
Posts: 29566
From: Some unacceptable view
Registered: Feb 2007



Posted 08-29-2023 09:26 PM
1985 Fiero GT
Member
Posts: 1210
From: New Brunswick, Canada
Registered: May 2023



Posted 09-02-2023 11:33 AM
I don't know how many other people have watched the series Person of Interest. It has 5 seasons, so it's a pretty long binge watch, but well worth it, and very fascinating. It follows the creation of an AI to find terrorists before they actually terrorise. It ends up with a "good" AI, a closed system without human access, which doesn't allow humans to use it for bad purposes, and later on another AI that doesn't have the safeties the good one has, which the government starts using for other purposes. Eventually the AIs are at war with each other, which results in the good AI emerging victorious.

All times are ET (US)




Copyright (c) 1999, C. Pennock