ARTIFICIAL INTELLIGENCE, ROBOTS & THE NEW PARADIGM OF REASONING SINGULARITY

News About AI System ChatGPT-5

{... "GPT-5 Stumbles: OpenAI Faces Setbacks in Next-Gen AI Development: OpenAI's ambitious GPT-5 project faces significant hurdles. Development is behind schedule, and the current results don't justify the immense costs. The model, codenamed Orion, has shown improvements but not the revolutionary leap expected. To overcome these challenges, OpenAI is exploring new strategies, including hiring humans to generate data and utilizing synthetic data created by another of its models." ...}

This aligns with my previous analysis that AI systems initially rely heavily on data.

As they approach a crisis of data scarcity, the development of subsequent versions will slow down.

This is because, statistically, they require a continually growing volume of data. At what point will this limitation become apparent?

Interplanetary Data

The data demands will exceed what current infrastructure can support.

When humanity begins interplanetary exploration, it will become clear that current AI systems are merely warming up (despite their undeniable practical benefits).

Eventually, data will grow beyond the scope of Earth, while the training costs of these systems will inflate to nearly infinite levels, far surpassing the actual needs.

At this juncture, AI development will either slow to a halt, or a new approach will emerge in which AI systems must mimic human thought processes.

Similarity

There are fundamental similarities between current AI and the human brain. Technically, AI requires data; humans also need data, but human data is intuitive (prepared from birth), reflexive or spontaneous. The brain naturally uses this data to analyze and predict sensory inputs.

The process is similar: we sense (like a CCTV), then the brain's neural network analyzes the sensory input to match it with pre-existing data (acquired through statistical analysis to identify pattern boundaries).

The matching process involves statistical estimation ("feeling out") and finding where it aligns.

For example, we learn what a mango is by observing it from various angles (right, top, bottom, left, front, and back, rotating it in nearly every direction) to record its shape, color, and contours.

After sufficient observation data is collected, millions of detailed images or pieces of information about the mango are stored. Later, in practice, the AI observes a certain shape (via CCTV, or sensory inputs analogous to human sight or touch), and the perceived details are matched against similar details in the stored data that indicate a mango's characteristics. If the scanned result aligns closely with mango-like features, the AI identifies it as a "mango."
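The matching step described above can be sketched as a nearest-prototype comparison: an observed feature vector is compared against stored reference vectors, and the closest label wins if the similarity clears a threshold. This is a minimal illustration of the general idea, not the author's actual method; the feature names, values, and the 0.9 threshold below are all hypothetical assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def classify(observation, memory, threshold=0.9):
    """Return the best-matching stored label, or None if nothing aligns closely."""
    best_label, best_score = None, threshold
    for label, prototype in memory.items():
        score = cosine_similarity(observation, prototype)
        if score >= best_score:
            best_label, best_score = label, score
    return best_label

# Stored "learning" data: one averaged feature vector per object.
# Hypothetical features: [roundness, yellowness, surface smoothness].
memory = {
    "mango": [0.8, 0.6, 0.3],
    "banana": [0.2, 0.9, 0.7],
}

print(classify([0.78, 0.62, 0.28], memory))  # close to the mango prototype
```

A new observation whose features sit near the stored mango vector is labeled "mango"; one that resembles nothing in memory returns None, mirroring how unfamiliar shapes fail to match any learned pattern.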

Humans do the same: learning, memorizing details, storing them, and later using stored data to find similarities in what they see or touch.

THIS IS THE SIMILARITY, BUT THE DIFFERENCE CREATES A GAP BETWEEN AI AND HUMANS AS VAST AS EARTH AND SKY. WHY IS THIS?

See further explanation here https://open.substack.com/pub/metaphilosophy/p/artificial-intelligence-robots-and?r=1awqlr&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true

