In Today’s AI World
A contribution to Joel Hynek’s presentation at the Chengdu Golden Panda Film Festival
by Craig S. Talmy

I am painfully aware that this email (in today’s AI world) has taken way too long to deliver to you. Many things in my first paragraph were outdated by the time I wrote the second paragraph. The trouble is, every day there is a new development — not a real development, but the hype and promise of a development.

So what I’ve tried to do is give you a basic overview of the state of useful AI, as well as some of its promise. Personally, I am inching towards becoming enthusiastic about it all (more on that later*). Much of this is my opinion (which is of course absolutely correct), but I would not actually call this a thoroughgoing overview — there are just too many avenues.

A Little Background

There is no actual Artificial Intelligence (AI). There is only programming that presents the illusion of thinking and/or the illusion of evaluation. AI is just sophisticated pattern matching: no thinking, no reasoning. AI can only do tasks accurately up to a certain degree of complexity, then FAIL!

Most if not all AI is built on Large Language Models (LLMs). Large Reasoning Models (LRMs) are built on top of LLMs. LRMs use a technique called Chain of Thought (CoT), which facilitates problem solving by guiding the model to articulate reasoning steps. By definition, Large Reasoning Models (LRMs) are Large Language Models (LLMs) focused on step-by-step thinking, or Chain of Thought (CoT) — I’m already exhausted.
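To make Chain of Thought concrete: in practice it is often nothing more exotic than asking the same model to write out intermediate steps before it answers. Here is a minimal sketch, assuming the openai Python package and an API key in the environment; the model name and the question are illustrative, not anything from this article.

```python
# Minimal sketch: the difference between a "plain" LLM call and a
# Chain-of-Thought (CoT) call is often just an instruction to show the steps.
# Assumes the `openai` package and OPENAI_API_KEY set in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 3:40 and arrives at 5:15. How long is the trip?"

# Direct answer: the model pattern-matches straight to an output.
direct = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)

# Chain of Thought: the same model, nudged to articulate step-by-step reasoning
# before answering. No new "thinking" machinery is added, only a different prompt.
cot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": question + " Explain your reasoning step by step, then give the answer.",
    }],
)

print(direct.choices[0].message.content)
print(cot.choices[0].message.content)
```

The point of the sketch: nothing new is “thinking” in the second call. The same pattern-matcher is simply prompted to emit its steps, which often improves accuracy and always improves the illusion.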

Imagine a computer programmed to be aware of, and to understand, the definition of every word in every dictionary, as well as the entirety of the internet (regardless of content relevance or reliability). All of this sets up a herculean basket of variables (reliability issues). All of this programming is intended to simulate problem solving and human-like understanding.
As an example, you might ask A.I. to provide you an image of a child in a field of flowers with a bright sun. The image produced will have the sun in the sky because we didn’t spell sun as “son.” But “a child in a field of flowers” is ambiguous: the child could end up standing in the field, buried among the flowers, or somewhere in between. Most generative AI that is any good takes thousands of attempts and hundreds of refined and refocused prompts to render the imagery you had in mind at the time of your request.
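For what that refinement loop looks like in code, here is a minimal sketch assuming the openai Python package and its image-generation endpoint; the prompts and model name are illustrative, and a real session might iterate dozens or hundreds of times before the result matches the picture in your head.

```python
# Minimal sketch of the prompt-refinement loop described above. Assumes the
# `openai` package and OPENAI_API_KEY in the environment; prompts and model
# name are illustrative.
from openai import OpenAI

client = OpenAI()

attempts = [
    # The ambiguous request: the model must guess where the child and the sun go.
    "A child in a field of flowers with a bright sun.",
    # A refined request that spells out what was left ambiguous the first time.
    "A young girl standing waist-deep in a field of wildflowers, "
    "bright midday sun high in a clear blue sky, seen from eye level.",
]

for prompt in attempts:
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1, size="1024x1024")
    print(prompt, "->", result.data[0].url)  # inspect, refine, repeat
```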

AI Search Engines

You’d think search engines like Google would be where AI — the ultimate summarizer — excels, but when AI chatbots like ChatGPT and Gemini first launched, it became evident that these tools were well-suited for tasks such as drafting customized emails, refining code, and more — because those tasks are based on rules. However, Google Search still performed better at tasks like checking live scores or reading about a recent event.

While AI chatbots have evolved significantly since their initial launch, Google Search remains a more reliable tool for finding accurate information. Firstly, you get to decide your sources, and secondly, AI chatbots can hallucinate (head down the wrong path, then start to combine multiple sources and topics until there is no sense at all) and answer inaccurately. That said, while having control over the sources is beneficial, browsing through multiple sources can be a time-consuming task.

One current alternative to this pain point is Perplexity. Instead of imitating a human-like response, Perplexity reads through web sources and provides a source-backed summary, somewhat similar to Google’s AI Overviews — but better. While it can still perform typical AI chatbot tasks like writing an email, generating images, and more, its specialty lies in offering an alternative to traditional search. Perplexity once had a clear edge over tools like ChatGPT because it was devised as a summarizer — not intended to imitate human thought like ChatGPT.

ChatGPT is the big star in AI because it is designed to provide human-like responses and simulate human-to-human interaction. And to many people, chatting with an AI bot feels like magic. *Back to my second paragraph: recently I have struggled to find a solution to a technical issue we have. I’ve read everything I could find. I spoke to every expert I could get to talk to me. After nearly a year, I was not one step closer to a solution. Then I asked the AI robots, “why can’t I make a thing that does this?” ChatGPT sent me pornography. Google said there is no known mention of such a device in all of the known history of the universe. Gemini told me that such a product does exist, but that there are no words to describe such a tool, and that it is beyond human intelligence to understand even the explanation. Perplexity told me that I could make one, but gave me a list of several respected companies who have made them for years! Perplexity is free through any browser. Give it a try.

I recently had to write a speech for a friend — for him to give at a film festival. The topic was The Cultural Impacts of AI (in terms of AI-generated films). Here are a few severely edited paragraphs that might be relatable for you — think literature and libraries (which soon will be found only in online history blogs).
(Please excuse any slop here, I’m trying to make these lengthy paragraphs as efficient as I can)

A.I.’s Influence On Entertainment, Cultural Evolution, Artistic Expression and
Expanded Access and Creative Inclusion

A.I.’s influence on entertainment is expansive, with profound implications for culture, society, and creativity. The growing use and popularity of A.I. entertainment creation is driving major social and cultural changes, less in how content is made than in who can participate and how people interact with media and each other.

A.I. is democratizing content creation, enabling a wider range of people to produce art, music, films, and literature. Traditional ideas about creativity are being challenged, as A.I. tools blur the line between human and machine-made art.
NOTE: Currently I don’t think that there is any actual ART or Literature generated by AI — just summaries of art or literature. See the current cover of Time Magazine: https://www.geekwire.com/2025/times-100-most-influential-people-in-ai-includes-tech-leaders-with-seattle-and-pacific-nw-roots/

This “art” is fun to look at, and it does fill the brief – it is actually AI generated, with the prompt of — Create a large-scale abstract art piece using only previous covers (5,000) of Time Magazine. It’s cool, very interesting, and impressive that it is as coherent as it is.

In Linda’s (Linda S. Wall, PhD) and my work worlds, we see hundreds of jobs going away. We hate the loss of human artists and writers in the job markets. But the bean-counters and hype-drivers are all going to want to save money and excite people with new tech. Sadly, I expect the resulting content will be bland at best. Once the splash of AI animation comes out, and the movies don’t do well, people (and hopefully executives) will remember why they pay humans. When I would do the animation or effects for a movie, I might have 65 animators and 600+ artists on the job. Every one of those people brought their unique vision, talent and ideas to their task — and that all makes for a much better result.

A.I. tools will lower barriers to entry for media creation, enabling people from diverse backgrounds, geographies, and skill levels to contribute to culture. New voices and perspectives that were previously underrepresented or excluded from mainstream media channels will now have access. Communities can develop their own stories, music, art, and cultural artifacts more easily — if you accept that telling a computer to do something for you is the same as actually developing your own stories, music, art, and cultural artifacts.

AI will foster creative empowerment and self-expression — which is fantastic. AI enables more people to participate as both creators and consumers by lowering cost and technical barriers to production and distribution. For example: AI-generated music, indie videos, or art and literature produced without human hours of effort or expensive studio equipment might bring cultural experiences to broader audiences.

There is a reason that expensive studio equipment and armies of talented artists exist — the resulting product is the contribution of each artist and the quality of the equipment. Imagine recording a song in Peter & Jolean’s living room. Does that yield the same listener experience as music recorded in a professional environment? What if Jolean & Peter are the writers and vocalists (god forbid)? What if Jolean & Peter ask an AI bot to write the song, generate the music, and have a synthetic voice sing it — all easily possible, but should Stevie Wonder or Elton John start to worry? Not so much. But what can happen to the music of Thelonious Monk or John Coltrane in the hands of Peter & Jolean? We aren’t talking about Tatyana Ali, or Beyoncé, or Deja Vu — all of whom have had big hits that rely, heavily in some cases, on an old Steely Dan tune. We envision an AI world where that same song is plagiarized to the point where the original has been diluted into oblivion or re-appropriated into a Trump battle hymn — because there is no longer any ownership or custodianship. This is a huge problem for me: AI, like so many technologies, puts into the hands of ordinary citizens immense power to do so little with. I think one day the value of originality and authenticity will return, as audiences seek meaning and shared experiences beyond commodified, endlessly algorithm-optimized content.

Changing Social Norms and Interactions, Cultural Representation and Bias

Barriers between creators and audiences are dissolving, making interaction more fluid — which is good. But A.I.-generated entertainment can create something akin to “echo chambers” by personalizing content feeds and recommendations, reducing exposure to diverse perspectives and experiences (a huge negative).

Social platforms powered by A.I. are shifting how people connect and communicate, sometimes leading to more superficial or transactional relationships. A.I. models can amplify or perpetuate cultural biases, depending on their training data. This can affect how different cultures are represented and perceived in media, potentially shaping attitudes and beliefs subtly over time. There is growing pressure for culturally sensitive A.I. systems and more inclusive curation and taste-making — Taste Making! We already have synthetic influencers (digital humans, usually pretty girls) telling us what shampoo to use.

Increased Cultural Representation, Support for Cultural Preservation

Easier access to sophisticated tools means traditionally marginalized groups can showcase their heritage, languages, and experiences through new media forms. This helps diversify cultural narratives and supports cross-cultural dialogue, fostering understanding and appreciation of different cultures. AI can help preserve, document, and revive endangered languages, art styles, and traditions through digital means, making cultural heritage accessible to future generations. Automated translation, AI restoration, and re-creation of traditional works all contribute to sustaining intangible cultural assets — all good things.

Innovation and Evolution of Cultural Forms, Economic and Ethical Impacts

With more creators participating, cultural innovation accelerates. AI-driven experimentation can lead to hybrid forms of art, music, and entertainment, challenging conventions and evolving cultural practices faster than before. The blending of computer-generated and organic content is prompting new aesthetic and artistic standards. All of which will cause even more debate over copyright, identity, and the rights of performers, particularly as it enables digital replicas and synthetic performances. There are calls for transparency in how content is made and for new regulations to protect creative professionals, privacy, and consumer trust. Did you know that you can buy a digital Bruce Willis to star in your movie?

Some Bullet Points

  • Democratization of content = Broader participation, new voices
  • Personalization = “Echo chambers”, reduced diversity
  • Rapid cultural evolution = New forms/values, accelerated change
  • Increased authenticity demand = Value in organic/human creativity
  • Bias amplification = Risks in cultural representation
  • Creator-audience dynamics = Fluid but challenging boundaries
  • Economic/ethical concerns = Copyright, identity, consent issues

Despite Billions In Investment, Large Reasoning Models Are Falling Short

In June, Apple released a paper, The Illusion of Thinking: Understanding the Limitations of Reasoning Models via the Lens of Problem Complexity. It examines the reasoning ability of Large Reasoning Models (LRMs) such as Claude 3.7 Sonnet Thinking, Gemini Thinking, DeepSeek-R1, and OpenAI’s o-series models — how they think, especially as problem complexity increases. Despite the increasing adoption of Generative AI and the presumption that AI will replace tasks and jobs at scale, these Large Reasoning Models are falling short. As part of the study, researchers created a closed puzzle environment for games like Checker Jumping, River Crossing, and Tower of Hanoi, which simulate varied conditions of complexity. Past a certain level of complexity, they all failed.

Why Machines Aren’t Intelligent

OpenAI has announced that its latest experimental reasoning LLM, referred to internally as the “IMO gold LLM”, has achieved gold medal level performance at the 2025 International Mathematical Olympiad (IMO).

Unlike specialized systems like DeepMind’s AlphaGeometry, this is a reasoning LLM, built with reinforcement learning and scaled inference, not a math-only engine. As OpenAI researcher Noam Brown put it, the model showed “a new level of sustained creative thinking” required for multi-hour problem-solving. But let’s remember: there is NO thinking. They need to say thinking to get media buzz. CEO Altman said this achievement marks “a dream… a key step toward general intelligence”. But it’s more accurate to say that the model showed a new level of sustained step-by-step “if this then that” processing. Still an achievement, just not thinking. And the claim that it met the needs of multi-hour problem-solving will all be moot when quantum computing becomes more reliably capable and mainstream.

Undoubtedly, machines are becoming exceptionally proficient at narrowly defined, high-performance cognitive tasks. This includes mathematical reasoning, formal proof construction, symbolic manipulation, code generation, and formal logic. Their capabilities also extend significantly to computer vision, complex data analysis, language processing, and strategic problem-solving, because of significant advancements in deep learning architectures (such as transformers and convolutional neural networks — which even Linda and I regularly use at home), the availability of vast datasets for training, substantial increases in computational power, and sophisticated algorithmic optimization techniques that enable these systems to identify intricate patterns and correlations within data at an unprecedented scale and speed. These systems can accomplish sustained multi-step reasoning, generate fluent human-like responses, and perform under expert-level constraints similar to humans.

With all this, and a bit of enthusiasm, we might be tempted to think that this means machines are becoming incredibly intelligent, incredibly quickly. Yet still this would be a mistake. Because being good at mathematics, formal proof construction, symbolic manipulation, code generation, formal logic, computer vision, complex data analysis, language processing, and strategic problem-solving, is neither a necessary nor a sufficient condition for “intelligence”, let alone for incredible intelligence.

The fundamental distinction lies in several key characteristics that machines demonstrably lack. Machines cannot seamlessly transfer knowledge or adapt their capabilities to entirely novel, unforeseen problems or contexts without significant re-engineering or retraining. They are inherently specialized. They are proficient at tasks within their pre-defined scope, and their impressive performance is confined to the specific domains and types of data on which they have been extensively trained. This contrasts sharply with the human capacity for flexible learning and adaptation across a vast and unpredictable array of situations. Machines do not possess the capacity to genuinely experience or comprehend emotions, nor can they truly interpret the nuanced, unspoken context on which human understanding depends.

Intelligence Illusion: What Apple’s AI Study Reveals About Reasoning… The Great AI Deception

The gleaming veneer of artificial intelligence has captivated the world, with large language models producing eloquent responses that often seem indistinguishable from human thought. Yet beneath this polished surface lies a troubling reality that Apple’s latest research has brought into sharp focus: eloquence is not intelligence, and imitation is not understanding.

Apple’s new study, titled “The Illusion of Thinking,” has sent shockwaves through the AI community by demonstrating that even the most sophisticated reasoning models fundamentally lack genuine cognitive abilities. This revelation validates what prominent researchers like Meta’s Chief AI Scientist Yann LeCun have been arguing for years—that current AI systems are sophisticated pattern-matching machines rather than thinking entities.

Breakthrough Apple study shows advanced reasoning AI doesn’t actually reason at all

With just a few days to go until WWDC 2025, Apple published a new AI study that could mark a turning point for the future of AI as we move closer to Artificial General Intelligence (AGI). Apple created tests that reveal reasoning AI models available to the public don’t actually reason. These models produce impressive results on math problems and other tasks because they’ve seen those types of tests during training. They’ve memorized the steps to solve problems or complete various tasks users might give to a chatbot.

But Apple’s own tests showed that these AI models can’t adapt to unfamiliar problems and figure out solutions. Worse, the AI tends to give up if it fails to solve a task. Even when Apple provided the algorithms in the prompts, the chatbots still couldn’t pass the tests. Apple researchers didn’t use math problems to assess whether top AI models can reason. Instead, they turned to puzzles to test various models’ reasoning abilities. The tests included puzzles like Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World. Apple evaluated both regular large language models (LLMs) and large reasoning models (LRMs) using these puzzles.
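For scale, the puzzle at the center of that collapse is almost embarrassingly easy to state. Below is the textbook recursive solution to Tower of Hanoi; this is not Apple’s evaluation code, just the standard algorithm, included to show how short the procedure is and why complexity explodes: n disks need 2^n - 1 moves, and it is along that axis that the models’ accuracy falls apart.

```python
# Textbook recursive Tower of Hanoi, the same puzzle the Apple study scaled up.
# Not Apple's evaluation code; just the standard algorithm, to show how small
# the procedure is that the reasoning models failed to follow as n grew.
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the list of moves that transfers n disks from source to target."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # park the smaller stack
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # restack on top of it
    return moves

for disks in (3, 7, 10):
    print(disks, "disks:", len(hanoi(disks)), "moves")  # 7, 127, 1023 moves
```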

By the way, Apple has some skin in the game of poo-pooing AGI. Currently their own version of AI is less than impressive.

Prompt engineering is dead. Long live context engineering!

A long-used trick from computer manufacturers is to engineer their machines to “feel faster.” They didn’t process much faster, but the user input / computer interaction was quicker, and therefore the machine was faster… not! Benchmark tests proved that even though a mouse click response felt faster, the actual computing process was not all that much faster. Why do we care about that? For a while, prompt engineering felt like strategy. Craft the perfect input, unlock perfect output. Adjust the tone here and there, and suddenly your chatbot sounds like a senior marketer. A productivity revolution. A creative partner. Maybe even a competitive edge. But it wasn’t. It was a placeholder—an interface trick for extracting meaning from a system that knew nothing about your business.

But prompting became popular not because it worked, but because it was the only tool available. It gave us the illusion of control while hiding a more significant truth: AI that doesn’t understand your context will never deliver your strategy — and now the limitations are showing.
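To see what “context engineering” means in practice, here is a minimal sketch assuming the openai Python package; load_company_context() is a hypothetical helper standing in for whatever actually holds your positioning, pricing, and compliance material (a database, a wiki export, a retrieval pipeline).

```python
# Minimal sketch contrasting prompt engineering with context engineering.
# Assumes the `openai` package and OPENAI_API_KEY; `load_company_context`
# is a hypothetical stand-in for a real source of business context.
from openai import OpenAI

client = OpenAI()

def load_company_context() -> str:
    # Hypothetical: a real system would pull approved, current material
    # (positioning docs, price lists, compliance rules) rather than a string.
    return (
        "Positioning: mid-market analytics platform, priced per seat.\n"
        "Compliance: no claims about guaranteed outcomes.\n"
    )

question = "Draft a one-paragraph pitch for our analytics product."

# Prompt engineering: a cleverly worded request. The model knows nothing
# about the business and will improvise the specifics.
prompt_only = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Act as a senior marketer. " + question}],
)

# Context engineering: the same request, grounded in the material the
# answer must actually reflect.
with_context = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the context below.\n" + load_company_context()},
        {"role": "user", "content": question},
    ],
)

print(prompt_only.choices[0].message.content)
print(with_context.choices[0].message.content)
```

The design point is simple: the second call can only be as wrong as the context you hand it, while the first call is free to improvise.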

AI can’t scale relevance

Prompt-based tools scale content, but not relevance. They move faster, but not smarter. Ask them to reflect your differentiated value proposition, pricing rationale, and compliance nuance—and they improvise. Eloquently. Confidently. Wrongly. What happens when you scale improvisation? You multiply risk.

Curb your enthusiasm

OpenAI’s latest model, GPT-5, was said to be smarter than GPT-4, but not by much. In truth, the user base revolted as they hit stumbling blocks and failures. OpenAI brought GPT-4 back the same day.

After all is said and done, we all need to embrace AI / AGI. It’s here to stay, and soon it will even be useful on a daily basis. Even in the world of Big Time Hollywood Visual Effects.

Also Spock Joel Hynek: Generative AI’s Impact to Film Making, Chengdu Golden Panda Film Festival 2025
Craig Talmy and Joel Hynek meeting in Los Angeles, September, 2025, photographed by Vista Annam
