ted.neward@newardassociates.com | Blog: http://blogs.newardassociates.com | Github: tedneward | LinkedIn: tedneward
Some of you will not like this presentation
Some of you may even be triggered by my assertions
I am happy to argue with you...
... but not until after the talk
No, you're not going to lose your job
... any more than you were before 2022
... at least, not to AI
https://sloanreview.mit.edu/article/the-hidden-costs-of-coding-with-generative-ai/
No, humans are not in existential danger
... at least, not from AI
https://www.thepittpulse.org/ai-worst-fear
No, AI is not revolutionary
we've been at this since 1950
complete with corresponding media hype
https://ai-2027.com/
“Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”
https://www.historyofdatascience.com/dartmouth-summer-research-project-the-birth-of-artificial-intelligence/
No, we are not on the cusp of AGI
we have some tools/tactics/techniques that can do some things well
and the rest, not at all
https://www.nytimes.com/2024/08/14/us/waymo-robotaxi-parking-lot-honking.html
https://www.businessinsider.com/tesla-optimus-robots-bartending-controlled-by-humans-2024-10
https://www.bloomberg.com/news/articles/2020-02-13/inside-the-firings-at-softbank-s-robot-pizza-startup
Yes, our profession is evolving
which it always does
and not always in the manner we expect
https://futurism.com/chatgpt-marriages-divorces
Of course, World War Two dominates the 40s
computers used as maths machines
calculating artillery tables and the like
Alan Turing; Bletchley Park and ENIGMA
Many technologies were advanced in the name of "the war"
airplanes and flight
plexiglass (which led to contact lenses)
atomic power
computers
"The Imitation Game" is an interesting parlor game
a masked conversation between an interrogator and two hidden players, one a man, one a woman
the interrogator must guess which is which from their answers alone
Turing posited his Test:
the "machine could think" if it could pass as human in the game
albeit, sitting at a terminal, not on TV
whether this is a good test of "thinking" is a hot debate topic
1956: Dartmouth Summer Research Project on Artificial Intelligence
John McCarthy coins the phrase "artificial intelligence"
1964: ELIZA creates appearance of human interaction
Joseph Weizenbaum
"the first chatbot"
Bell-bottoms and disco shoes
Nobody wants to remember those days
CLIPS is a Lisp-like expert system
facts declare atoms of data
rules describe how new facts can be inferred from existing ones
MYCIN is an early rule-based medical expert system built on the same facts-and-rules approach
early attempt to replace Dr House with software
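The whole idea fits in a few lines. A minimal forward-chaining sketch of the facts-and-rules approach, in Python rather than actual CLIPS syntax, with made-up "medical" rules:

# Minimal forward-chaining sketch of the facts-and-rules idea behind CLIPS/MYCIN.
# Not CLIPS syntax; the medical "knowledge" here is invented for illustration only.
facts = {("symptom", "fever"), ("symptom", "cough")}

# Each rule: if all antecedent facts are present, assert the consequent fact.
rules = [
    ({("symptom", "fever"), ("symptom", "cough")}, ("suspect", "infection")),
    ({("suspect", "infection")},                   ("action", "order-lab-test")),
]

changed = True
while changed:                      # keep firing rules until nothing new is inferred
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(sorted(facts))
# infers ("suspect", "infection"), then ("action", "order-lab-test")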
first proposed in 1965 (fuzzy set theory by Zadeh)
software implementation systems emerge
particularly in control systems (e.g. trains, rice cookers, etc)
great success in Japan
but not much uptake in the rest of the software space
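A toy fuzzy controller, with invented membership breakpoints, just to show the crisp-in / fuzzy-rules / crisp-out shape that made those rice cookers work:

# Toy fuzzy-control sketch: crisp input -> fuzzy memberships -> weighted crisp output.
# Membership breakpoints and heat levels are invented purely for illustration.
def mu_cold(t):  # degree to which temperature t "is cold", in [0, 1]
    return max(0.0, min(1.0, (70 - t) / 30))

def mu_hot(t):   # degree to which t "is hot"
    return max(0.0, min(1.0, (t - 50) / 30))

def heater_output(t):
    # Rules: "if cold then high heat (1.0)", "if hot then low heat (0.1)".
    weights = [(mu_cold(t), 1.0), (mu_hot(t), 0.1)]
    total = sum(w for w, _ in weights)
    return sum(w * out for w, out in weights) / total if total else 0.0

print(heater_output(55))  # mostly "cold" -> strong heat (~0.78)
print(heater_output(68))  # mostly "hot"  -> heat backed off (~0.19)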
"boom" cycle ended in spectacular fashion in 2001... and again in 2008
Bayesian filters proposed as anti-spam tools
but never really marketed as "AI"
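The core of those filters is a one-line application of Bayes' rule. A back-of-the-envelope sketch with invented word probabilities (real filters train these from your mail):

# Back-of-the-envelope naive Bayes spam scoring, the idea behind 2000s-era filters.
# Word probabilities below are invented for illustration, not trained from real mail.
from math import prod

# P(word | spam) and P(word | ham) for a few words, plus a 50/50 prior P(spam).
p_spam = {"viagra": 0.20, "meeting": 0.01, "free": 0.15}
p_ham  = {"viagra": 0.001, "meeting": 0.05, "free": 0.02}

def spam_probability(words):
    ps = prod(p_spam.get(w, 0.01) for w in words)   # likelihood under "spam"
    ph = prod(p_ham.get(w, 0.01) for w in words)    # likelihood under "ham"
    return ps / (ps + ph)                           # Bayes with a 50/50 prior

print(spam_probability(["free", "viagra"]))    # close to 1.0 -> looks like spam
print(spam_probability(["meeting", "free"]))   # lower, closer to a coin flip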
Cloud led to surge in compute availability
"data science" begat "machine learning"
image recognition started to surge, then faded
voice assistants (Siri, Alexa, etc) came to the fore, then faded
speech-to-text, text-to-speech
2022: ChatGPT is unveiled online in a Google-like UI
with predictable results
beginning the current hype cycle
LLMs (and SLMs) seemingly complete the natural language processing challenge
to ground their answers in external data, we added retrieval-augmented generation (RAG)
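RAG is less magical than it sounds: retrieve the documents most similar to the question, paste them into the prompt. A minimal sketch; the toy bag-of-words "embedding" and the generate() stub stand in for whatever models you actually use:

# Minimal shape of retrieval-augmented generation (RAG): retrieve relevant text,
# then stuff it into the prompt as grounding context.
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())          # toy stand-in for a real embedding

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

def generate(prompt: str) -> str:                 # hypothetical: call your LLM here
    return f"[LLM response to a {len(prompt)}-character prompt]"

def answer(question: str, documents: list[str], k: int = 2) -> str:
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(embed(d), q), reverse=True)
    context = "\n\n".join(ranked[:k])             # top-k documents become the grounding
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

docs = ["The 1956 Dartmouth workshop coined 'artificial intelligence'.",
        "ELIZA was an early chatbot from the 1960s.",
        "Fuzzy logic found commercial success in Japanese appliances."]
print(answer("Who coined the term artificial intelligence?", docs))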
an(t)-thrə-pə-ˈmȯr-ˌfi-zəm n
the attribution of human characteristics or behaviour to a god, animal, or object.
an interpretation of what is not human or personal in terms of human or personal characteristics
is the attribution of human form, character, or attributes to non-human entities. It is considered to be an innate tendency of human psychology.
from the very first, humans sought to ascribe human qualities to inhuman things
oceans, seas, rivers
thunder and lightning
wine
we even do this today with our pets and other animals
Disney has made bajillions off of this
but we are starting to see dangerous forms of it emerging
"Married Japanese man 'finds love' with sex doll" - https://www.dailymail.co.uk/femail/article-3661804/Married-Japanese-man-claims-finally-love-sex-doll.html
"The rise of ChatGPT therapy and our constant need for feedback" - https://www.businessinsider.com/chatgpt-therapy-risks-benefits-boundaries-2025-3
AIs are now given human-like names or descriptors
Claude
Copilot
Gemini
... and those companies are more interested in legal protection than change
"in a recent cross-sectional study of 195 randomly drawn patient questions, a team of licensed health care professionals compared physicians and ChatGPT’s responses to patients’ questions. The chat bot responses were preferred over physician responses and rated significantly higher for both quality and empathy. Importantly, the proportion of responses rated empathetic or very empathetic was significantly higher for the chat bot than for physicians, amounting to a 9.8 times higher prevalence of empathetic or very empathetic responses for the chat bot. This means that almost half of responses from ChatGPT were considered to be empathetic (45%) compared to less than 5% of those from physicians."
https://link.springer.com/article/10.1007/s43681-024-00419-4
so I say "please" and "thank you" to the AI
so what if I ask it to tell me I'm a good person
who cares if it tells me I should just end it all?
"The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame" - https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147
when you do not understand how a thing is built
... you tend to overgeneralize the thing
and few, if any of us, know how LLMs/SLMs are built
... so we fall into the same trap
One of those classic "unlabeled axis" graphs
often drawn as a basic "sine wave"
introduced in 1995 (!) by analyst Jackie Fenn
Technology Trigger
Peak of Inflated Expectations
Trough of Disillusionment
Slope of Enlightenment
Plateau of Productivity
Research and development
Startup companies, first round of VC funding
First-gen products, high price, large customization requirements
Early adopters investigate
Mass media hype
Supplier proliferation
Activity beyond early adopters
Negative press activity (signals start of next stage)
Supplier consolidation and failures
Second/third rounds of funding
Less than 5% of potential audience has adopted
Second-generation products/services
Methodologies, "best practices" emerge
Third-generation products/services
Out-of-the-box ready-to-go
Product suites
High adoption (20-30% of total potential audience)
we are in a time of "irrational exuberance"
the markets/VCs have oriented themselves around something very silly and very expensive
we've seen this exuberance before
voice assistants
blockchain
NFTs
but never at this scale
"Companies Built on Top of LLMs Don't Make Much Money"
"only twelve generative AI-powered companies making $100 million annualized (or $8.3 million/month) in revenue"
"there are seven companies that make $50 million or more ARR ($4.16 million a month)"
2025 Numbers
"Microsoft AI Revenue: $13b on CapEx of $80b"
"Amazon AI Revenue: $5b on CapEx of $105b"
"Google AI Revenue: $7.7b on CapEx of $75b"
"agents"
"autonomous AI that can go and do stuff without oversight"
e.g. Salesforce's "Agentforce": "deploy AI agents at scale"
honestly? chatbots
"its own research shows that agents only achieve around a 58% success rate on single-step tasks"
"On multi-step tasks... they succeed a depressing 35% of the time"
(by the way, "if-this-then-that" rules are not LLMs at work; those are expert systems)
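For intuition only (the 35% is Salesforce's measurement, not derived this way): if each step succeeds independently at the reported 58% rate, chaining steps compounds the failure fast:

# Illustrative only: if each step of an agent's plan succeeds independently with
# probability 0.58 (the reported single-step rate), chaining steps compounds failure.
p_single = 0.58
for steps in (1, 2, 3, 5):
    print(steps, round(p_single ** steps, 2))
# 1 0.58
# 2 0.34   <- already in the neighborhood of the reported ~35%
# 3 0.2
# 5 0.07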
nondeterministic
the "ChatGPT agent"
prepared demo: "preparing an itinerary of how to visit every (MLB) ballpark"
it missed every single ballpark on the East Coast
it added a random stadium in the middle of the Gulf of Mexico
from a pre-prepared demo
OpenAI (or other LLM) API calls
Fallacies of Distributed Computing
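Every agent step is a network round trip, so the old fallacies (the network is reliable, latency is zero) apply in full. A sketch assuming the current OpenAI chat-completions REST endpoint; swap in your own model name, endpoint, and error handling:

# Every "agent" step is a remote API call over a network that is neither reliable
# nor zero-latency. Assumes the OpenAI chat-completions REST endpoint; adjust to taste.
import os, time, requests

def ask(prompt: str, retries: int = 3) -> str:
    for attempt in range(retries):
        try:
            resp = requests.post(
                "https://api.openai.com/v1/chat/completions",
                headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
                json={"model": "gpt-4o-mini",
                      "messages": [{"role": "user", "content": prompt}]},
                timeout=30,                      # latency is not zero
            )
            resp.raise_for_status()              # the network is not reliable
            return resp.json()["choices"][0]["message"]["content"]
        except requests.RequestException:
            time.sleep(2 ** attempt)             # back off and retry
    raise RuntimeError("LLM call failed after retries")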
OpenAI's business model is... not confidence-inducing
"OpenAI currently spends $2.35 to make $1"
for OpenAI to become profitable?
Raise its prices to its consumers (e.g., AI startups)
... which it's already done at least once
Keep Calm
AI is not taking over the industry, the world, or humanity
any more than it already was, anyway
Remember that AI is better known for its "winters" than its successes
a history of "overpromise and underdeliver"
anything that stuck around is no longer called "AI"
Learn how it's done
Build an LLM or SLM on your own
Don't just constantly feed the big LLMs
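A character-level bigram model is obviously not an LLM, but it shows the core mechanic in miniature: estimate next-token probabilities from data, then sample. A toy sketch:

# Not an LLM, but the same core mechanic in miniature: count how often each character
# follows each other character, then sample the next one from that distribution.
# Swap counts for a transformer and characters for tokens and you have the general shape.
import random
from collections import Counter, defaultdict

corpus = "the model predicts the next token from the tokens before it"

counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1                     # "training": bigram frequency counts

def sample_next(ch: str) -> str:
    nxt = counts.get(ch)
    if not nxt:
        return random.choice(corpus)
    chars, weights = zip(*nxt.items())
    return random.choices(chars, weights=weights)[0]   # "inference": sampling

text = "t"
for _ in range(40):
    text += sample_next(text[-1])
print(text)    # plausible-looking gibberish; scale is most of the magic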
Learn to use the dev-AI tools
We're slowly starting to figure out what works
... and what doesn't
"Copilot is like a thoughtless yet high-functioning, practically poor intern:
Great with syntax memory
Surprisingly quick at listing out your blind spots
Good at building scaffolding if you feed it the exact right words
Horrible at nuance
Useless without supervision
Will absolutely kill you in production if left alone for 30 seconds"
https://deplet.ing/the-copilot-delusion/
Brace for some chaos
The market is going to correct before long
Ensure your sources of insight are varied and diverse
Most of all, now is the time to diversify your portfolio
embrace some liberal arts topics
Political Science, Economics, Sociology, Psychology, ....
things that an LLM will not be able to integrate into your job
Become an iconoclast (someone who does things others say cannot be done)
The AI Con, by Bender, Hanna (ISBN 978-0063418554)
Smart Until It's Dumb, by Maggiori (ISBN 978-1838337230)
AI Snake Oil, by Narayanan, Kapoor (ISBN 978-0691249643)
"What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference"
Siliconned, by Maggiori (ISBN 978-1838337261)
"How the tech industry solves fake problems, hoards idle workers, and makes doomed bets with other people's money"
Anthropomorphism in AI: hype and fallacy, by Placani
https://link.springer.com/article/10.1007/s43681-024-00419-4
A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going, by Wooldridge
A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, by Bennett
Principles of Artificial Intelligence, by Nilsson
Wikipedia: Dartmouth workshop
https://en.wikipedia.org/wiki/Dartmouth_workshop
Wikipedia: History of AI
https://en.wikipedia.org/wiki/History_of_artificial_intelligence
"The Hidden Costs of Coding with Generative AI"
https://sloanreview.mit.edu/article/the-hidden-costs-of-coding-with-generative-ai/
"Gen AI: Too much spend, too little benefit?" (Goldman Sachs)
https://www.goldmansachs.com/images/migrated/insights/pages/gs-research/gen-ai--too-much-spend%2C-too-little-benefit-/TOM_AI%202.0_ForRedaction.pdf
Where's Your Ed At:
https://www.wheresyoured.at/how-to-argue-with-an-ai-booster/
https://www.wheresyoured.at/the-haters-gui/
https://www.wheresyoured.at/ai-bubble-2027/