The First Saudi Artificial Intelligence... and a Company Wants to Buy Google

 

The Dawn of a New Era

It’s August 2025 — the world feels different. Technology no longer evolves in decades or years; it mutates in weeks. From the shimmer of holographic glasses to artificial minds that reason, remember, and create, humanity is crossing another invisible threshold. Every month now feels like a leap across eras. In this month’s roundup of the most remarkable tech stories, we witness a wave of innovation that blurs reality itself: Meta’s holographic revolution, Google’s universe-creating AI, OpenAI’s long-awaited GPT-5, and the first fully Arabic artificial intelligence built in Saudi Arabia. It’s not just another cycle of updates; it’s a glimpse of the era taking shape around us.



The Future of Seeing: Meta’s Hologram Glasses

When Meta announced a collaboration with Stanford University to develop the world’s first true hologram glasses, many assumed it was another VR stunt. But this time, it’s different. Until now, all “virtual reality” and “augmented reality” headsets — from Meta’s own Quest line to Apple’s Vision Pro — relied on tiny internal displays tricking your brain into seeing depth. What you saw wasn’t real depth, just optical illusions layered with clever software. But Meta’s new prototype changes everything. Using a breakthrough in light field manipulation, the glasses don’t just simulate depth — they recreate it. Each photon of light is rearranged to mimic how we perceive real-world objects, producing a genuine three-dimensional hologram floating in front of your eyes. No more heavy headsets, no more dizziness, no more “screen windows” pretending to be portals.


These glasses are astonishingly thin — just 3 millimeters thick — looking almost like normal eyewear. The engineering challenge was monumental: to fit a dynamic holographic projector into a frame slimmer than a pencil. Yet, the early demos show holographic objects that seem physically tangible, visible even under daylight. Of course, Meta stresses that this is a research prototype. It might take several years before reaching the public. But the direction is clear. We’re heading into a cyber-physical world, one where the line between atoms and light collapses. For now, the dream is still confined to labs. But if Meta’s pace continues, the day may come when we slip on glasses that project not just augmented data, but an entirely new layer of existence.


Google Gen 3: Building Worlds Out of Words

While Meta reshapes light, Google decided to reshape reality itself. At its secretive labs, Google unveiled Gen 3, an artificial intelligence system that can generate fully interactive worlds. Imagine typing a sentence — “Create a desert planet with floating cities and twin suns” — and instantly walking through it. Not watching it, not playing it, but existing inside it.

Gen 3 represents a quantum leap from its predecessors. Earlier experimental models could design static environments or short, limited scenes. But Gen 3 adds two monumental abilities: Memory — it remembers every object, event, and action within the created world. Persistence — when you leave and return, the world remembers what happened. This might sound trivial, but it’s a fundamental shift in how digital environments behave. In older AI worlds, once you exited, everything reset. With Gen 3, reality becomes continuous — like Minecraft built by a god who never forgets.
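The shift from resetting worlds to persistent ones can be pictured in a few lines of code. This is only an illustrative sketch — Gen 3’s actual internals are not public, and every name here is invented — but it captures the idea: a world “remembers” simply because its objects and events are written to durable storage and reloaded on the next visit.

```python
import json
from pathlib import Path

class PersistentWorld:
    """Toy model of a world with memory: objects and events
    survive between sessions because state is saved to disk.
    (Hypothetical sketch; not Gen 3's real architecture.)"""

    def __init__(self, save_path="world_state.json"):
        self.save_path = Path(save_path)
        if self.save_path.exists():
            # Persistence: returning players find the world as they left it.
            self.state = json.loads(self.save_path.read_text())
        else:
            # A fresh world starts empty.
            self.state = {"objects": {}, "events": []}

    def place(self, name, position):
        # Memory: every action is recorded, not just the final layout.
        self.state["objects"][name] = position
        self.state["events"].append(f"placed {name} at {position}")
        self._save()

    def _save(self):
        self.save_path.write_text(json.dumps(self.state))

# First session: build something, then "leave".
world = PersistentWorld()
world.place("floating_city", [12, 408])

# Second session: a brand-new process reloads the same world.
world2 = PersistentWorld()
print(world2.state["objects"])
```

In an older, non-persistent design, the second constructor call would start from an empty dictionary; here it finds the floating city exactly where it was left.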


The potential uses go far beyond gaming. Architects can prototype cities, scientists can simulate ecosystems, and storytellers can live inside their narratives. For education, medicine, and design, this is an unprecedented playground. But there’s a darker undertone too. When an AI can generate and sustain entire worlds, where does the human imagination fit in? And when those worlds feel real, who controls their laws — the coder or the algorithm itself? For now, Gen 3 is still under controlled testing, but the footage shown at Google’s 2025 developer showcase left audiences stunned. “It doesn’t just make games,” one engineer said. “It makes realities.”


OpenAI’s GPT-5: The Calm Before the Storm

Two years after GPT-4 redefined artificial intelligence, OpenAI has finally released GPT-5 — a model that is less flashy than many expected, but far more refined. With over 800 million monthly users relying on its technology, expectations were astronomical. Everyone wanted a model that could reason like a human, code like a machine, and write like a poet. But OpenAI took a quieter approach. Instead of reinventing the wheel, they perfected it.


GPT-5 is faster, more memory-efficient, and dramatically less prone to “hallucinations” — those confident yet false statements that plagued previous versions. It also introduces a subtle but powerful innovation: the ability to automatically choose the right internal model for each task. Instead of forcing users to pick between GPT-4, GPT-4-turbo, or GPT-3.5, the system now decides in real-time which cognitive layer to use — optimizing for speed, accuracy, or creativity depending on the context. For the first time, users can rely on AI to manage its own intelligence. Even more surprising, OpenAI announced the release of its first open-source models in six years — a gesture of goodwill (and a nod to critics who mocked the irony of “Open” AI being closed for so long). Developers can now experiment with these smaller models locally, paving the way for a new era of decentralized innovation. Yet despite the excitement, a question lingers in the air: is GPT-5 the peak of this generation, or merely the calm before a storm of even more powerful systems?
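GPT-5’s automatic model selection can be imagined as a lightweight router sitting in front of several internal tiers. The sketch below uses invented tier names and crude keyword heuristics purely for illustration — OpenAI has not published how its real router decides.

```python
def route(prompt: str) -> str:
    """Pick an internal model tier for a request.
    Tier names and heuristics are hypothetical; the real
    router's criteria are not public."""
    text = prompt.lower()
    wants_reasoning = any(k in text for k in ("prove", "step by step", "debug"))
    wants_creativity = any(k in text for k in ("poem", "story", "slogan"))
    if wants_reasoning:
        return "deep-reasoning"   # slower, more accurate tier
    if wants_creativity:
        return "creative"         # tuned for open-ended generation
    return "fast"                 # default low-latency tier

print(route("Debug this stack trace"))       # deep-reasoning
print(route("Write a poem about autumn"))    # creative
print(route("What's the capital of Peru?"))  # fast
```

The point is the interface, not the heuristics: the user sends one request and the system spends compute where the task demands it, trading speed against depth without being asked.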


When AI Meets Absurdity: Perplexity’s Wild Move

While the giants of Silicon Valley were busy pushing science forward, one company decided to go for a moonshot — or perhaps a comedy sketch. In a story that went viral overnight, Perplexity, a relatively young AI search company, reportedly offered to buy Google Chrome for $34.5 billion. The internet collectively laughed. Chrome, the world’s most popular browser with over a billion users, is practically priceless to Google’s ecosystem. Meanwhile, Perplexity’s total market valuation? Around $18 billion. It’s like bidding nearly twice your company’s entire worth for a product that dwarfs you a hundredfold. “It’s like trying to buy a Tesla with lunch money,” one tech analyst joked. Of course, no one expects the deal to happen. But the gesture signals something deeper — a growing ambition among smaller AI startups that believe they can outthink, if not outspend, the tech giants. If the last two years have proven anything, it’s that innovation no longer belongs only to the biggest names. A good model, a bold idea, and a viral moment can change the hierarchy overnight.



Google’s 2025 Hardware Showcase: The Pixel Decade

On August 20, 2025, Google hosted its annual “Made by Google” event — a date that has quietly become as iconic for Android fans as Apple’s September keynote is for iPhone lovers. This year marked the tenth generation of the Google Pixel, and the company celebrated in style. The lineup included the Pixel 10, Pixel 10 Pro, and Pixel 10 Pro XL; the Pixel 10 Pro Fold, the first foldable Pixel with full IP68 water resistance; the Pixel Watch 4 and Pixel Buds 2; and a new AI-powered Gemini Smart Speaker, Google’s response to Amazon’s Echo.

Each device will receive seven years of software support — a major leap for Android reliability. Prices range across markets, but early reviews highlight sleek design, better battery optimization, and deeper AI integration via Gemini, Google’s central intelligence hub. The foldable Pixel, in particular, drew applause for being thinner, tougher, and more practical than previous folds. Reviewers called it “the first foldable that doesn’t feel experimental.” Even more impressive was the Gemini integration: every device, from watch to speaker, communicates fluidly through the same AI ecosystem. You can ask your Pixel Watch to draft an email, and your Gemini speaker finishes it aloud. The era of isolated gadgets is ending; the age of synchronized intelligence has begun.


When Deleting Emails Saves Water: Britain’s Unusual AI-Eco Initiative

In the UK, a collaboration between the government, environmental agencies, meteorological offices, and water utilities unveiled an initiative that sounds almost absurd at first: delete your old emails and photos to save water. The connection might seem tenuous, but it’s scientifically grounded. Data centers require massive cooling systems to keep servers operational. These systems consume enormous quantities of water, sometimes enough to rival municipal water usage in small cities. By reducing unnecessary data storage, the campaign argues, water usage drops — a small step for the individual, a measurable impact at scale.
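A back-of-envelope calculation shows why individually trivial deletions can matter in aggregate. Every figure below is a rough assumption chosen for illustration — not data from the UK campaign — but the structure of the estimate (storage → energy → cooling water) is the argument the initiative rests on.

```python
# Back-of-envelope: water cost of stored data at scale.
# All constants are illustrative assumptions, not campaign figures.
GB_DELETED_PER_USER = 5        # assumed old emails and photos per person
KWH_PER_GB_YEAR = 0.01         # assumed storage + cooling energy per GB-year
LITERS_WATER_PER_KWH = 1.8     # assumed data-center water intensity
USERS = 10_000_000             # assumed participating users

saved_kwh = GB_DELETED_PER_USER * KWH_PER_GB_YEAR * USERS
saved_liters = saved_kwh * LITERS_WATER_PER_KWH
print(f"{saved_liters:,.0f} liters of water per year")  # 900,000 liters/year
```

One person deleting five gigabytes saves a fraction of a liter; ten million people doing it saves on the order of a municipal swimming pool’s worth of water every year. Whether the real constants are twice or half these guesses, the shape of the argument — tiny per-person effect, measurable collective effect — holds.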


It’s an illustration of the hidden environmental footprint of the digital world. Every unread email, every forgotten photo, every dormant backup consumes real resources. Critics have called it a gimmick; supporters see it as a clever way to embed environmental consciousness into daily habits. Either way, it’s a reminder that the digital revolution doesn’t just alter how we work and play — it touches the most fundamental aspects of life, including something as basic as water.


Human Chat: Saudi Arabia’s Leap into AI

Amid the noise of global tech giants, a quieter revolution emerged in Riyadh, Saudi Arabia. On August 25, 2025, the Human Company unveiled Human Chat, the first fully Arabic artificial intelligence system. Unlike most AI models, which are predominantly Western and designed around English-language datasets, Human Chat was built for Arabic speakers, taking into account local dialects, Islamic values, and cultural nuances. It’s powered by a large language model called Alam, designed to reflect Saudi and broader Arab societal contexts.


The implications are profound: for the first time, AI can interact fluently with users in Arabic without losing subtlety or cultural relevance. Applications range from educational tools and business solutions to voice-controlled assistants, capable of understanding regional idioms and even humor. Human Chat represents more than technological advancement; it’s a strategic cultural statement. By developing AI locally, Saudi Arabia asserts its position not just as a consumer of technology, but as a creator. Hosting is entirely local, addressing privacy concerns and ensuring sensitive data doesn’t leave national borders. Although currently limited to Saudi users, expansion into the Middle East and beyond seems inevitable.


Google’s Nano Banana: Image Generation Redefined

While Human Chat reshapes language, Google’s Nano Banana, part of the Gemini 2.5 Flash Image model, is redefining visual creativity. Unlike conventional image-generating AI, Nano Banana emphasizes character consistency. Users can now merge multiple images without losing the identity of any included subject. Objects, people, and backgrounds retain coherence — a massive improvement over older AI, which often distorted or reinvented elements unexpectedly. Nano Banana also supports image editing in seconds, not minutes, making it not just faster but smarter. Designers, content creators, and hobbyists can explore visual storytelling without being bogged down by long rendering times or inconsistencies.


AI in Real-Time Play: From Concept to Reality

The Gemini system integrates AI across multiple platforms, allowing for instant real-time world building. This represents a broader trend: AI is no longer confined to analysis or static outputs. It’s becoming dynamic, interactive, and persistent. During demos, users asked AI to draw a complex scenario — a white cat battling an orange rabbit in a pixelated world. Within seconds, the AI rendered the scene, respecting all creative constraints. Characters were added, backgrounds adjusted, and the overall scene maintained its coherence from one edit to the next.