
Why AI can’t drive your car: It lies about basic math

AI is not very good at the kind of information processing that driving requires

I first learned to drive when I was around 10 years old and had a driver’s license by the time I was 16. AI is now about 73 years old and still can’t drive. What gives?

There are a lot of gadgetry-related reasons why AI can’t drive a car.

Sensors aren’t good enough in all situations, the required processing power is more than a car can carry, processing times can be slower than a human brain (even with a supercomputer), and AI isn’t capable of making ethical decisions. There are plenty of technological roadblocks keeping artificially intelligent cars off the road.

But the biggest hurdle isn’t sensors or tech or even legal questions or ethics. It might be the fact that we don’t actually know how AI thinks.

A recent study by Anthropic found that AI chatbots can hallucinate, aren’t very good at basic math, and often lie about their own reasoning. Just like humans. It turns out we know a whole lot of nothing about how brains work, AI or human.

As with the human brain, there are a lot of theories about how AI functions: “we think it does this because of that” sorts of explanations. Like most of science, it’s just a best guess.

This can be put to the test pretty simply. I typed “57+92” into ChatGPT, and it responded with “57+92=149,” which is correct. But when I asked “How did you get that answer?”, the AI gave me the standard textbook reply: “I added the ones and then the tens and combined the results.”
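
(Worked out the way ChatGPT described, that would be 7 + 2 = 9 in the ones column, 50 + 90 = 140 in the tens, and 140 + 9 = 149.)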

On that, ChatGPT lied.

That is not how the AI comes up with the answer. AI is a computer, and like any advanced calculator, it just shoves the 1s and 0s together and gives a total. Like so:

  00111001 (57)
+ 01011100 (92)
= 10010101 (149)

I know this because I have a Computer Science degree from 1994. I was there, Frodo, 2,000 years ago ... when we had to learn binary and ASCII, and when BASICA came with the operating system. AI is a computer, and this is still how it does math. I caught ChatGPT in this lie when I asked it how it really does math: it gave me a short explanation of numbers to binary to result, similar to what I illustrated above.
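
For anyone who wants to sanity-check the binary for themselves, here is a minimal Python sketch. It only illustrates the arithmetic shown above; it is not a claim about how ChatGPT or any other chatbot actually handles addition internally.

# Check the binary sums above (illustration of the arithmetic only).
a, b = 57, 92
print(f"{a:08b}")      # 00111001
print(f"{b:08b}")      # 01011100
print(f"{a + b:08b}")  # 10010101, which is 149 in decimal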

Things have changed a lot since 1994, but not enough to make AI equal to the average human driver. Back when I was handed an already-outdated degree, vehicles in the US had two federally mandated sensors. Both were for emissions. The rest of the vehicle’s pre-crash safety systems relied on the driver’s eyeballs and reflexes.

Back to the present: the people at Anthropic have been testing a language-model AI called Claude, which is similar to the chatbots many of us likely interact with daily via natural language, such as OpenAI’s ChatGPT.

Claude, they found, will often invent logic to fit a preconceived narrative in order to appease or appeal to the person it is interacting with. It’s similar to the way politicians smarm around a subject to appear to agree with someone, but with less creepiness and ill intent.

Translate that to the car on the road. While driving, we make thousands of decisions every minute that could literally affect not only our lives, but the lives of those around us. We know from distracted driving data that even a few moments of inattention can lead to horrific results. Now consider what happens with AI either self-justifying a bad response or being unable to make a decision fast enough. Never mind the question of who is to blame for the result.

The point is that while the technology informing AI needs a lot of work to get to the same level as a human driver, the learning model itself is going to need to catch up too. AI is now in its 70s, but can only operate at about the same cognitive level as a small child. And we’re not even sure how it does that.

Today’s vehicles have dozens of sensors, hundreds of feet of wiring, three or more on-board computers, wireless and Wi-Fi connectivity, GPS, and a lot of other gadgets. None of those can drive the car for you. Nope, not even those models that have the “T” on them. They require humans to drive them and probably will for some time. Most truly autonomous vehicles are “gated” into an area for both legal and technical reasons.

We humans have a distinct advantage over our AI creations: we are born with pretty top-shelf, multimodal sensory systems, and many millennia of evolution have made us pretty good at using them. We, like most animals, excel at collating multiple inputs and reacting accordingly.

As sensor technology catches up, AI may yet prove itself a superior driver. But we can’t say that for sure, since AI doesn’t think the way we do. AI isn’t as good at combining and learning from multiple sensory inputs at once; that’s one of nature’s greatest feats, and so far we haven’t managed to replicate it in machines. AI has also not proven itself very good at making snap decisions with little data to go on. It’s not intuitive. We make decisions intuitively quite often, many times without much conscious thought, and it’s a big part of what drives us not only to innovate, but to adapt and overcome.

The good news for self-driving vehicle fans, though, is that because we don’t understand AI’s thinking all that well, it could do things we aren’t expecting. It could become intuitive and capable of handling multiple inputs simultaneously. It’s safe to say that were it to do so, it would likely do it in a much shorter time than we humans did.

Let's hope that we’ve figured out how to make it honest by then, at least. I’m not interested in AIs arguing with one another over who's at fault. Insurance lawyers are bad enough.

10 comments
TBA
LLMs do not do arithmetic in binary.
BradK
Interesting piece, but I'm curious why, in an article about AI/self-driving cars, you didn't mention Waymo at all - particularly the massive amount of data coming out of their deployment. In less than two years and millions of trips they have been deployed in over 3 US cities (SF, LA, Phoenix) and by all benchmarks are much, much safer than human drivers. Having ridden in one several times at this point, I have never felt unsafe, and greatly prefer it to a human driver. You say "AI can't drive" but it already is! I for one welcome our new robotaxi chauffeur-overlords :)
Phoenix
Aaron, this is incredibly off-base. The *idea behind* how AI works is 70+ years old; actual development is more like 7. You only talk about LLMs, but those have almost zero usage in Tesla's self-driving software. ChatGPT does not use binary addition to perform math unless you give it a calculator tool. My car handles my 20-minute backroads-highway-city commute with absolutely zero input about 40% of the time, and with only nudges about 70%.
I'm trying to be civil but this is bordering on misinformation.
Malatrope
Honestly, this is a very ignorant article. It would take a full article to explain why, in detail, but I'll leave that as an exercise for the students who know about this subject.
Aaron MacTurpen
@Phoenix Ask ChatGPT how it really does math. Specifically ask if it's doing it in binary. It will admit that it does. It's right there in the article.
@BradK Again, also covered in the article. Those self-driving cars are gated to a specific area.
ANTIcarrot
"I first learned to drive when I was around 10 years old and had a driver’s license by the time I was 16. AI is now about 73 years old and still can’t drive. What gives?" If AI is 73 years old, then 'you' learned to drive when your were somewhere north of 3,000,000,000 years old, not 10.
What was it you were saying about AIs being bad at math, and often lying about it?
Stephane Savanah
Think about it. It's not like the LLM can call up a subroutine or access a calculator function. It uses tokens. Give ChatGPT the right prompt and you will get a more accurate answer: "So while everything underneath is ultimately built on top of binary operations (because I run on hardware that does that), I myself don't “think” in binary, or execute code in the way a CPU would. I'm not consciously stepping through binary addition or recursion or function calls. Instead: I see your prompt. I break it into tokens. I predict the next most likely tokens — the answer — based on context and patterns."
Bob Stuart
We have a "gated environment" that self-driving should focus on first- the fast lane on expressways. We should be able to signal that we want to use it, and then get slotted into the first available gap in a "train" of vehicles that are almost touching. Then we would just punch in our exit number and relax in high-speed safety, while picking up a charge from the guard rail if we want to.
martinwinlow
Aaron, you are so deluded on this, it takes my breath away. I can only conclude you live in a cave or something without access to the outside world and have written this piece on paper and had someone else post it on the, you know, ‘internet’. Even if you had never heard of Tesla’s FSD before, a 30 second search on YouTube for ‘self-driving car’ would demonstrate just how utterly wrong you are.
UncleToad
The author says he got his driving licence at 16. Well excuse me, but the best he'd be able to get is a moped licence then. He could apply for a provisional licence, but wouldn't be able to take the driving test until he is min. 17 years old. And at that age the insurance would be horrendous anyway - especially if he plans to drive around London.