

 |
| ChatGPT doesn't know how to spell (Page 2/2) |
|
TheDigitalAlchemist
|
JUN 30, 01:43 AM
|
|
| quote | Originally posted by CoolBlue87GT:
ChatGPT reminds me of talking with HAL |
|
cross between HAL and GLaDOS...
I miss the previous GEN of AI...
[This message has been edited by TheDigitalAlchemist (edited 06-30-2024).]
|
|
|
maryjane
|
JUN 30, 11:41 AM
|
|
| quote | Originally posted by TheDigitalAlchemist:
Funny thing is, when it says there are 3, I ask it "Are you SURE?" It usually says "My apologies" and reverts back to 1 or 2...
I don't like how polite it is... |
|
Many a hangman has apologized to the condemned just before he sent them thru the trap door with the rope around their neck.
|
|
|
Cliff Pennock
|
JUL 01, 08:53 AM
|
|
Many people misunderstand how Large Language Models (LLMs) work. They often assume these models function based on "knowledge" and "skill," but this is not entirely accurate. While LLMs can reproduce specific information from their training data, they do not have knowledge or skill in the human sense. They cannot memorize information or perform calculations with exact precision (unless they were trained for it). Instead, they operate through what can be thought of as "pattern recognition" or "probabilistic generation". Think of it as a brain that lacks the ability to store memories.
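To make "probabilistic generation" concrete, here's a deliberately tiny Python sketch. Every token and probability in it is made up purely for illustration; a real LLM computes these values from billions of learned parameters. The point is that nothing in the loop looks anything up or counts anything, it only picks a statistically likely continuation:

import random

# Toy stand-in for a language model: given the text so far, a made-up
# probability for each possible next token. A real model learns these
# from its training data instead of having them hand-written.
NEXT_TOKEN_PROBS = {
    "How many 'r's are in":            {" straw": 0.80, " the": 0.15, " a": 0.05},
    "How many 'r's are in straw":      {"berry": 0.95, "s": 0.05},
    "How many 'r's are in strawberry": {"?": 0.90, "!": 0.10},
}

def generate(text, steps=3):
    for _ in range(steps):
        probs = NEXT_TOKEN_PROBS.get(text)
        if probs is None:                    # no learned continuation
            break
        tokens, weights = zip(*probs.items())
        text += random.choices(tokens, weights=weights)[0]   # sample, don't "know"
    return text

print(generate("How many 'r's are in"))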
Consider this: what if I ask you verbally how many 'r's are in <insert word with many 'r's here>, but you have to answer quickly, without counting? This is where intuitive intelligence comes into play. You make a guess without deliberate thought; an "educated" guess.
Humans can provide the correct answer when given time because we can verify our initial thoughts using stored knowledge. We can recall the word from memory and count the 'r's.
Moreover, ChatGPT doesn't "see" your words the way humans do. When you input the sentence "how many 'r's in strawberry," it gets converted into a digital representation (through processes like tokenization and embedding), which is then processed by the neural network. If I ask you, "how many 'r's in brukdrra-ksslkjwerrrs;rkskdr;sd," the only way for you to answer correctly is to examine the word and count the 'r's one by one, because you don't have that word stored in your memory. But what if you couldn't actually "see" that word? What if that sentence is first converted to something else? What if I had asked that question verbally? Then there's no way for you to be sure how many 'r's are in that word. And that is the "problem" with LLMs.
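A rough Python sketch of that conversion step. The token split and the ID numbers here are invented for illustration (a real tokenizer such as OpenAI's tiktoken produces different pieces and different IDs), but the point stands: the network receives numbers, not letters.

# Hypothetical tokenization of the prompt. Split and IDs are made up;
# a real tokenizer (e.g. OpenAI's tiktoken) would differ.
TOKEN_IDS = {"how": 5301, " many": 867, " 'r's": 12094, " in": 287,
             " straw": 9788, "berry": 19772}

prompt = "how many 'r's in strawberry"
tokens = ["how", " many", " 'r's", " in", " straw", "berry"]
ids = [TOKEN_IDS[t] for t in tokens]

print("strawberry".count("r"))   # 3 -- trivial when you can actually see the letters
print(ids)                       # [5301, 867, 12094, 287, 9788, 19772] -- what the network gets
# Nothing in those numbers "contains" the letter 'r'. The model can only answer
# by having learned, statistically, what text those IDs tend to stand for.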
We often rely on intuitive processes ourselves. It's why we can function efficiently, though not without errors. This is how we drive cars and perform routine tasks without conscious thought. When we concentrate, we reduce errors but become less efficient, as concentration requires significantly more energy and time to complete tasks.
|
|
|
TheDigitalAlchemist
|
JUL 01, 02:03 PM
|
|
| quote | Originally posted by Cliff Pennock:
Many people misunderstand how Large Language Models (LLMs) work. They often assume these models function based on "knowledge" and "skill," but this is not entirely accurate. While LLMs can reproduce specific information from their training data, they do not have knowledge or skill in the human sense. |
|
That's why they aren't likely to suddenly evolve into AGI merely by "increasing Compute"
|
|
|
Cliff Pennock
|
JUL 01, 02:08 PM
|
|
| quote | Originally posted by TheDigitalAlchemist:
That's why they aren't likely to suddenly evolve into AGI merely by "increasing Compute"  |
|
The AI companies know that. That's why they are currently focusing on "memory". That will achieve AGI.
|
|

 |
|