Hello Ken, I've been reading your newsletter since I took a course.
Regarding developing an AI theory course, two axioms my professor shared long ago have guided me well:
1. The key word in artificial intelligence is artificial. It's only as smart as the people and the experience that programmed it. That's especially interesting today, because where do you get the data to train on? What counts as internet dreck (most of it), okay, good, or great writing? If you only go with the great writing, you don't have enough to train with... A key limitation I've found in my own work is that AI isn't trained to know the difference between Framework v2 and Framework v3, so it gives mixed-up answers. (It doesn't have that expertise because it wasn't trained with the version in mind; most Stack Overflow posts and other sources don't explicitly call the version out...) Consider writing a novel: if you trained only on (bad) fan fiction, will you get (bad) fan fiction when you use AI to write?
2. AI is ultimately all matrix math. That's a bit of an oversimplification, perhaps, but it's a good guiding principle for remembering what's going on in the background. "Neural network" is a buzzword for a particular approach to machine learning; it's still machine learning, and it reduces to matrix math. And of course, that's its power: computers are very good at doing lots of math quickly. (A quick sketch of what I mean follows below.)
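To make #2 concrete, here's a rough sketch of what I mean: a tiny two-layer network's forward pass in NumPy. The layer sizes and random weights are made up purely for illustration, but the whole computation is a couple of matrix multiplies plus an elementwise function.

```python
# A tiny two-layer neural network forward pass, written as plain matrix math.
# Everything here is matrix multiplication plus an elementwise nonlinearity.
import numpy as np

rng = np.random.default_rng(42)

# Weights and biases for a 4-input, 8-hidden, 3-output network (made-up sizes).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    """One forward pass: two matrix multiplies, a ReLU, and a softmax."""
    h = np.maximum(x @ W1 + b1, 0.0)      # hidden layer: matmul + ReLU
    logits = h @ W2 + b2                  # output layer: another matmul
    exp = np.exp(logits - logits.max())   # softmax turns scores into probabilities
    return exp / exp.sum()

x = rng.normal(size=4)   # a single 4-feature input
print(forward(x))        # three probabilities summing to 1
```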
I've been out of the loop for 10 years, so I'm not aware of all the fancy new stuff they're doing with natural language. But perhaps, at the very least, #2 helps in understanding the history.
All good points. I remember when we used to say that neural networks were just a form of nonlinear statistics, too. If I can get across that an LLM is mostly just a pattern-matching algorithm with some probability thrown in, I think they'll understand its limitations better. We'll see.
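As a rough illustration of "pattern matching with some probability thrown in," here's a toy sketch that just counts which words follow which in a tiny corpus and samples continuations from those counts. A real LLM learns its patterns with a transformer rather than a lookup table, but the predict-then-sample loop has the same shape.

```python
# A toy "predict the next word" loop: count which words follow each word in a
# small corpus, then sample continuations in proportion to those counts.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Pattern matching: record which words have been seen following each word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Sample the next word with probability proportional to observed counts."""
    counts = following[word]
    return random.choices(list(counts), weights=counts.values())[0]

# Generate a short continuation starting from "the".
word, out = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```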
As for people with money doing good, you might want to know about Jeff Atwood's most recent post: https://blog.codinghorror.com/stay-gold-america/.
Yeah, I saw that, though I'm not sure about it. He's a controversial figure, so I was waiting for some confirmation.