segunda-feira, 8 de setembro de 2025

AI Age - NVIDIA CFO Highlights Blackwell GB300 Ramp and Surging AI Chip Demand in Q2

 


NVIDIA's CFO, Colette Kress, recently shared insights on the company's Q2 performance, emphasizing significant growth in data center revenues and the rapid scaling of its Blackwell GB200 and GB300 AI solutions. Below are the key takeaways from the discussion for those tracking NVIDIA's advancements in AI and data center technology.

Strong Data Center Revenue Growth

NVIDIA reported a 12% quarter-over-quarter revenue increase in Q2, driven by its data center and networking segments, even after excluding China-specific H20 AI GPUs. Looking ahead, NVIDIA projects a robust 17% sequential growth for Q3, signaling strong demand for its AI and computing solutions.

Blackwell GB200 and GB300 Scale-Up Success

The ramp-up of NVIDIA's Blackwell GB200 rack-scale systems and the GB300 Ultra has exceeded expectations. Kress described the transition as "seamless," with significant scale and volume hitting the market. Analysts predict up to 300% sequential growth for the GB300 in Q3, underscoring NVIDIA's leadership in high-performance AI infrastructure.

Navigating China’s H20 AI GPU Market

Despite geopolitical challenges, NVIDIA has secured licenses to ship H20 AI GPUs to key Chinese customers. While uncertainties remain, Kress expressed optimism about completing these shipments, potentially adding $2 billion to $5 billion in revenue. This reflects NVIDIA's strategic focus on maintaining its foothold in the Chinese market amid local pushes for domestic chip alternatives.

Addressing AI Chip Competition and Power Efficiency

Recent market concerns, including Broadcom's $10 billion custom AI chip contract, have sparked debates about cost-effective AI chips. Kress emphasized that power efficiency is critical for AI computing, particularly for reasoning models and agentic AI. NVIDIA's focus on data center-scale solutions prioritizes performance per watt and per dollar, ensuring long-term efficiency for large-scale AI clusters.

Next-Gen Vera Rubin AI Chips on Track

NVIDIA's next-generation Vera Rubin AI chips are progressing on a one-year cadence, with all six chips already taped out. Kress highlighted early demand, noting "several gigawatts" of power needs already penciled in for Rubin-powered data centers, positioning NVIDIA to meet future AI infrastructure demands.

Why NVIDIA’s Strategy Matters

NVIDIA’s ability to scale its Blackwell GB200 and GB300 solutions, combined with its forward-looking approach to power-efficient AI systems, reinforces its dominance in the AI and data center markets. As demand for AI-driven computing grows, NVIDIA’s innovations in rack-scale solutions and next-gen chips like Vera Rubin ensure it remains a key player in the industry.

For the latest updates on NVIDIA’s AI advancements and market performance, stay tuned to xxxpctech for in-depth insights.

sexta-feira, 22 de agosto de 2025

AMD iROCm - GPT OSS 20B and Flux on a Stick: How a $100 AMD XDNA2 NPU Could Democratize AI and the ROCm Ecosystem

[Opinion] - Why NPUs Could Outrun GPUs in the AI Inference Race.



For years, GPUs have been the workhorse of AI. They’re powerful, massively parallel, and great at crunching the huge matrices that deep learning demands. But here’s the thing: GPUs were never designed for AI — they were designed for graphics. AI/ML just happened to fit.

Enter the NPU, Neural Processing Unit. Unlike GPUs, NPUs are purpose‑built for AI workloads. They don’t carry the baggage of graphics pipelines or shader cores. Instead, they’re optimized for the dataflow patterns of neural networks: moving data as little as possible, keeping it close to the compute units, and executing operations with extreme efficiency.
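
To make the data-movement point concrete, here is a minimal, illustrative Python sketch (ordinary NumPy, not NPU code) of the tiling idea these accelerators are built around: load a small tile once into fast local memory, reuse it for many multiply-accumulates, and only then move on.

```python
import numpy as np

def tiled_matmul(a, b, tile=64):
    """Illustrative tiled matrix multiply.

    Each (tile x tile) block of A and B is "loaded" once and reused for a
    full block of multiply-accumulates -- the same dataflow idea NPU engine
    arrays use to keep operands in local SRAM instead of refetching DRAM.
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    c = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            acc = np.zeros((min(tile, m - i), min(tile, n - j)), dtype=a.dtype)
            for p in range(0, k, tile):
                # On an NPU these two tiles would sit in per-engine SRAM.
                a_tile = a[i:i + tile, p:p + tile]
                b_tile = b[p:p + tile, j:j + tile]
                acc += a_tile @ b_tile
            c[i:i + tile, j:j + tile] = acc
    return c

if __name__ == "__main__":
    a = np.random.rand(256, 256).astype(np.float32)
    b = np.random.rand(256, 256).astype(np.float32)
    assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-3)
```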

Why XDNA 2 Is a Leap Forward 

AMD's XDNA 2 architecture, debuting in Strix Point, is a perfect example of this new breed. It delivers up to 50 TOPS of AI performance with native BF16 support, all in less than 10% of the SoC's die area. That's roughly 20 mm², or about 30 mm² if you add a basic LPDDR5X memory interface.





To put that in perspective:

  • A GPU block capable of similar AI throughput would be many times larger and draw far more power.

  • XDNA 2 achieves a 5× compute uplift over the first‑gen XDNA, with 2× the power efficiency. That means more AI work per watt, and less heat to manage.

  • Estimated 2–5 W TDP, or about 10 W if we push beyond the efficiency curve above the 50 TOPS threshold.

This efficiency comes from its tiled AI Engine arrays, local SRAM, and deterministic interconnects — all designed to minimize data movement, which is the hidden energy hog in AI processing. 

Because NPUs are so compact and efficient, they scale in ways GPUs can’t. You can add more NPUs without blowing up your power budget or die size. That’s why the idea of putting an XDNA 2 into a USB stick form factor isn’t just possible: it’s practical.

I'd venture that if AMD scaled up its NPUs to, say, 10× the current size, a 250 mm² die could deliver 500–550 TOPS while consuming under 50 W. An MCM design could reach 2,500 TOPS BF16 (dense or sparse) at 200 W, outperforming the GPUs currently used for inference.
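
A quick back-of-the-envelope check of that extrapolation, using the figures quoted above and assuming roughly linear scaling (an optimistic assumption; real designs lose some efficiency to interconnect, clocks, and memory bandwidth):

```python
# Back-of-the-envelope scaling of the XDNA 2 figures quoted above.
# Linear scaling of throughput and power with area is assumed, which is
# optimistic: interconnect, clocks and memory bandwidth all complicate this.

base_area_mm2 = 20   # approximate XDNA 2 engine-array area quoted above
base_tops = 50       # BF16 TOPS in Strix Point
base_power_w = 5     # upper end of the 2-5 W estimate

scale = 10           # the hypothetical 10x scale-up

print(f"~{base_tops * scale} TOPS, ~{base_power_w * scale} W, "
      f"~{base_area_mm2 * scale} mm2 of NPU silicon")
# -> ~500 TOPS at ~50 W, consistent with the 500-550 TOPS / under-50 W guess
```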

The iROCm Stick Concept - Hopefully AMD will be inspired by my idea.

Imagine a sleek red USB4/Thunderbolt stick, branded iROCm, with an XDNA 2 NPU inside and LPDDR5X memory onboard.

Two models could make AI acceleration accessible to everyone:

Model             Memory               Price      Target Audience
iROCm Stick 8GB   LPDDR5X @ 7533 MHz   $100-120   Students, hobbyists, AI learners
iROCm Stick 16GB  LPDDR5X @ 7533 MHz   $150       Indie devs, researchers, edge AI prototyping
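
A rough bandwidth argument for why the onboard LPDDR5X matters more than the USB4 link itself (all figures below are my own assumptions, including the 128-bit bus width and 4-bit quantization, not AMD specifications):

```python
# Rough arithmetic on why weights must live on the stick (my assumptions,
# not AMD specs: 4-bit quantization, 128-bit LPDDR5X bus, best-case USB4).

params = 20e9                      # a "GPT OSS 20B"-class model
weights_gb = params * 0.5 / 1e9    # ~4 bits/parameter -> ~10 GB, fits the 16 GB model

usb4_gbs = 40 / 8                          # USB4 ~40 Gbit/s -> ~5 GB/s
lpddr5x_gbs = 7533e6 * 128 / 8 / 1e9       # 128-bit bus @ 7533 MT/s -> ~120 GB/s

print(f"weights: ~{weights_gb:.0f} GB")
print(f"full pass over USB4:    ~{weights_gb / usb4_gbs:.1f} s")
print(f"full pass from LPDDR5X: ~{weights_gb / lpddr5x_gbs:.2f} s")
# Decoding reads the weights once per token, so onboard memory bandwidth,
# not the USB4 link, sets the achievable tokens per second.
```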

You plug it in, and your laptop, even a thin‑and‑light, instantly gains a dedicated AI accelerator. No driver nightmares, no bulky eGPU enclosure. Just ROCm‑powered AI in your pocket. Under 10 W, portable, affordable — everyone (and their dog) can try the ROCm ecosystem and any apps AMD develops.
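
Purely as a thought experiment, here is what the developer experience could look like if ROCm ever exposed such a stick as an ordinary PyTorch device; the stick, its driver, and its visibility through the standard device API are all assumptions, not shipping AMD products:

```python
# Hypothetical sketch: running a small BF16 model on an "iROCm Stick" IF
# ROCm exposed the NPU as a regular PyTorch device. The stick and its
# driver are assumptions; only the fact that ROCm builds of PyTorch reuse
# the "cuda" device name is real.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).to(device=device, dtype=torch.bfloat16)  # BF16 matches XDNA 2's native datatype

x = torch.randn(8, 1024, dtype=torch.bfloat16, device=device)
with torch.inference_mode():
    y = model(x)
print(y.shape, "on", device)
```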

quarta-feira, 4 de junho de 2025

Typing test [EN-US] - Test your typing skills

 

Quick Typing Test

Click 'Start Test' to begin. The widget reports your WPM, accuracy, and remaining time (30 s per run).
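
For reference, a minimal sketch of how typing widgets commonly derive the two numbers shown above, using the usual five-characters-per-word convention (the test embedded on this page may compute them differently):

```python
def wpm(correct_chars: int, seconds: float) -> float:
    """Words per minute, using the common 1 word = 5 characters convention."""
    return (correct_chars / 5) / (seconds / 60)

def accuracy(correct_chars: int, typed_chars: int) -> float:
    """Share of typed characters that matched the prompt, as a percentage."""
    return 100 * correct_chars / typed_chars if typed_chars else 0.0

# Example: 275 correct of 290 typed characters in a 30-second run
print(f"{wpm(275, 30):.0f} WPM, {accuracy(275, 290):.1f}% accuracy")  # 110 WPM, 94.8% accuracy
```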

segunda-feira, 17 de fevereiro de 2025

Crypto thief Paul Vernon behind Cryptsy scam found running new scam - China and US now united against the criminal

 The notorious crypto scammer behind the Cryptsy and Altilly exit scams has resurfaced under a new identity. This time, he operated as “Karl” from Xeggex, another fraudulent exchange that has now collapsed. Thanks to the efforts of the crypto community, he appears to have finally been tracked down—despite the FBI and U.S. authorities failing to bring him to justice.

Who Is He?



Paul Vernon, also known as "Michael O'Sullivan" or "Karl", has a long history of crypto fraud:

  • Cryptsy (2016) – Feds: Cryptsy Founder Stole Millions in Users' Cryptocurrency
    Federal authorities report that Paul Vernon orchestrated a "sophisticated theft scheme" between 2013 and 2015, stealing over $9M before fleeing to China. (The estimated total of misappropriated crypto has since surpassed $1 billion.)
  • Altilly (2020) – Orchestrated a fake exchange hack, stealing $3.3M+.
  • Xeggex (2022–2024) – Followed the same pattern before shutting down.

What Happened with Xeggex?

Xeggex started as a seemingly legitimate exchange, offering free listings to attract users. However, things took a turn when:

  • The platform claimed a Telegram hack.
  • Then reported database corruption.
  • Finally, announced missing funds, with all major assets gone.

Now, Vernon is erasing traces of himself. 

BREAKING: We Found Him 

Investigators have located his mansion in Dalian, China. He is reportedly using fake passports from Ecuador and Vanuatu under the name Michael O’Sullivan. The FBI and U.S. Marshals have been informed, and discussions with U.S. authorities are underway. 

🔗 Full Investigation & Evidence (Live Updates):
👉 Click here

How You Can Help

  • If you lost funds, track Xeggex wallets and report them to major exchanges.
  • Share this with crypto communities to warn others.
  • If you have law enforcement or journalist contacts, help bring attention to this case.

The fight against crypto fraud continues. Let's make sure Vernon doesn't escape justice again and go on destroying lives through the same unpunished cycle of fraud and theft.

All but one of the 17 charges against Vernon carry maximum prison sentences of 20 years, according to the indictment, which has been embedded at the end of this article.

Sources:

  • 🚨 Paul Vernon (Cryptsy, BiteBi9, Altilly, Xeggex) – Serial Crypto Scammer and Wanted Fugitive Exposed! 🚨 : r/Bitcoin
  • miaminewtimes.com/news/cryptsy-paul-vernon-stole-millions-cryptocurrency-13799886