Friday, October 17, 2025

Billions keep pouring into the Red Team - Oracle unveils a 50,000-AMD-GPU supercluster.

 

Finally, competition emerges that threatens to put Nvidia on the ropes. 

Oracle and AMD to Launch First AI Supercluster with 50,000 AMD Instinct MI450 GPUs in 2026.



October 14, 2025: Oracle and AMD have announced a groundbreaking collaboration to deploy the first publicly available AI supercluster powered by 50,000 AMD Instinct MI450 Series GPUs, starting in Q3 2026. This initiative, revealed at Oracle AI World in Las Vegas and Santa Clara, marks a significant milestone in their decade-long partnership to advance AI infrastructure through Oracle Cloud Infrastructure (OCI).

Unprecedented AI Performance

The supercluster, which will continue expanding in 2027, will be built on AMD’s “Helios” rack design.

This architecture ensures maximum performance, energy efficiency, and scalability for demanding AI and HPC workloads.

Why It Matters

With AI models rapidly outgrowing current infrastructure, Oracle’s supercluster addresses the need for flexible, high-performance solutions. Key benefits include:

  • Breakthrough Compute: Handles 50% larger models in-memory, reducing model partitioning.

  • Scalable Design: Dense, liquid-cooled 72-GPU racks optimize cost and efficiency.

  • Open-Source Support: AMD ROCm™ software stack simplifies migration and supports popular AI frameworks.

  • Advanced Security: DPU-accelerated networking and EPYC CPUs enhance data protection.
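The “50% larger models in-memory” claim is ultimately about aggregate HBM capacity per rack. A minimal sketch of that capacity argument, assuming hypothetical figures (the post gives no MI450 memory specs; the 288 GB per GPU and FP16 weight size below are illustrative placeholders):

```python
# Back-of-envelope: does a dense model fit entirely in one rack's GPU memory?
# All figures are illustrative assumptions, not confirmed MI450 specs.

def fits_in_rack(params_billion, bytes_per_param=2,   # FP16/BF16 weights
                 gpus_per_rack=72, hbm_per_gpu_gb=288):
    """Return (model_gb, rack_gb, fits) for weights held fully in-memory."""
    model_gb = params_billion * 1e9 * bytes_per_param / 1e9
    rack_gb = gpus_per_rack * hbm_per_gpu_gb
    return model_gb, rack_gb, model_gb <= rack_gb

# A hypothetical 1-trillion-parameter model against one 72-GPU rack:
model_gb, rack_gb, fits = fits_in_rack(1000)
print(f"model: {model_gb:.0f} GB, rack: {rack_gb} GB, fits: {fits}")
```

More per-GPU memory means fewer partitions per model, which is exactly the partitioning reduction the bullet describes.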

Building on Proven Success

This announcement builds on OCI’s prior integration of AMD Instinct MI300X and MI355X GPUs, with the latter now generally available in OCI’s zettascale Supercluster, scaling up to 131,072 GPUs. These solutions offer unmatched price-performance and flexibility for generative AI and large language models.

Industry Impact

Mahesh Thiagarajan, EVP at OCI, emphasized, “Our customers are pushing AI boundaries, and this collaboration with AMD delivers the scalable, secure infrastructure they need.” Forrest Norrod, AMD’s EVP, added, “Together, we’re accelerating AI innovation with open, optimized systems.”

For more details, visit Oracle’s AI solutions or AMD’s Instinct GPUs.

Keywords: Oracle AI Supercluster, AMD Instinct MI450, AI infrastructure, cloud computing, generative AI, high-performance computing

Saturday, October 11, 2025

Google Takes Down EMUCR — Nintendo at it Again?

 

EmuCR Blog Taken Down: What’s Next for the Emulation Community?



This month the emulation scene woke up to unwelcome news: a takedown that arrived without warning and for reasons that remain unclear. The emulation community was hit with a setback as EmuCR, a key hub for emulator updates and downloads, was removed by Blogger for breaching community guidelines. The site, a go-to resource for enthusiasts tracking the latest developments in emulation software, is now in limbo while its administrators await the outcome of a re-review request.

The removal has sparked discussions about the challenges of maintaining emulation-focused platforms under strict platform policies. If the re-review fails, the EmuCR team faces two paths: finding a new hosting platform to restore their content or building an independent alternative from scratch. Both options come with hurdles—migrating to a new platform risks further scrutiny, while creating a standalone site demands significant time and resources.

For now, the emulation community is left watching closely, as the resolution could set a precedent for how similar platforms navigate content moderation in the future.

Thursday, September 25, 2025

Dying Light: The Beast - Hotfix 1.2.1 just dropped on Steam

Shortly after its launch and stratospheric sales success, Dying Light: The Beast has already received its first patch with targeted fixes.



"Hotfix 1.2.1 for Dying Light: The Beast is now live on PC! Make sure to update your game—console updates will follow soon.

This hotfix addresses the Indoor Rain issue and fixes the Disturbed Day/Night Cycle.

Once the update is live on all platforms, the APEX Car Skin will appear in the in-game stash for anyone who pre-ordered or owns the Dying Light 2 Ultimate Edition."


Check Steam to stay updated.

Tuesday, September 23, 2025

AMD Patents High-Bandwidth RAM Architecture to Double DDR5 Speeds - Strix Halo iGPU level to the masses?

 

AMD Patents High-Bandwidth RAM Architecture to Double DDR5 Speeds

AMD has unveiled a groundbreaking patent for a new RAM architecture aimed at overcoming the bandwidth bottlenecks of DDR5 memory. The innovation, dubbed High-Bandwidth Dual Inline Memory Module (HB-DIMM), promises to double data rates to 12.8 Gbps on the memory bus, far exceeding DDR5's native 6.4 Gbps. This development comes as DDR5 struggles to keep pace with the escalating demands of high-performance gaming, graphics processors, and servers.

Key Features of the HB-DIMM Architecture

The patent introduces several advanced elements to enhance memory performance:

  • Dual-Speed Data Buffering: Multiple DRAM chips connect to data buffer chips that transmit data at twice the speed of standard memory chips, enabling non-interleaved transfers for simpler signal integrity and lower latency.
  • Pseudo Channels and Intelligent Routing: A register clock driver (RCD) uses a chip identifier (CID) bit to route commands to independently addressable pseudo-channels, boosting parallel access and throughput.
  • Flexible Operating Modes: Supports 1n and 2n modes for optimized clocking, along with programmable switches between pseudo-channel and quad-rank setups, ensuring compatibility with DDR5 standards.
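A small numeric model of the scheme described above, using the article's figures (6.4 Gbps native, 2× buffering, two pseudo-channels routed by a CID bit); the 64-bit channel width is an illustrative assumption:

```python
# Illustrative model of the HB-DIMM idea in the patent: ordinary DRAM chips
# run at DDR5 speed, but data-buffer chips drive the host-facing bus at
# twice that rate by draining two pseudo-channels in parallel.

DDR5_PIN_RATE_GBPS = 6.4   # per-pin rate of the DRAM chips themselves
BUFFER_MULTIPLIER = 2      # buffer chips retransmit at 2x the DRAM rate
PSEUDO_CHANNELS = 2        # independently addressable via a CID bit

host_rate = DDR5_PIN_RATE_GBPS * BUFFER_MULTIPLIER
print(f"host-facing pin rate: {host_rate} Gbps")  # 12.8 Gbps, as in the article

# What a 64-bit channel moves at that pin rate, vs. native DDR5:
channel_gbs = host_rate * 64 / 8   # Gbps x bits / (bits per byte) -> GB/s
native_gbs = DDR5_PIN_RATE_GBPS * 64 / 8
print(f"64-bit channel: {channel_gbs:.1f} GB/s vs {native_gbs:.1f} GB/s native")
```

The doubling comes entirely from the buffer chips, which is why the patent can reuse existing DDR5 DRAM dies unchanged.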



According to the patent, "The memory bandwidth required for applications such as high-performance graphics processors... are outpacing the roadmap of bandwidth improvements for DDR DRAM chips." This architecture leverages existing DDR5 chips without major manufacturing overhauls, making it a scalable upgrade for future systems.

Implications for Gaming and AI

If implemented, HB-DIMM could revolutionize RAM performance in high-end PCs, AI workloads, and data centers by addressing DDR5's stagnation. AMD's move aligns with its recent patents, including blower fan designs for laptops and smart cache systems for processors, signaling a push toward next-gen hardware innovation. The biggest beneficiaries of this kind of advancement would be iGPUs, which rely on RAM as VRAM. The IP described here could be integrated into the next generation of handhelds and consoles, bringing the performance of costly high-end chips like the Strix Halos to the masses.

This patent, accessible via WIPO, raises questions about the future of RAM evolution amid rising computational needs.

Tuesday, September 16, 2025

ROCm 7.0 - Bringing proper competition to CUDA

ROCm 7.0: AMD's AI Powerhouse for Next-Gen Performance and Efficiency



In the fast-evolving world of AI, AMD is pushing boundaries with ROCm 7.0, a robust open-source platform tailored for generative AI, large-scale training, inference, and accelerated discovery. This release spotlights the new AMD Instinct MI350 series GPUs, delivering unprecedented computational power, energy savings, and scalability to meet the demands of enterprise AI workloads.

Empowering the MI350X Era 



At the heart of ROCm 7.0 is support for the MI350X and MI355X GPUs, featuring eight Accelerator Complex Dies (XCDs) with 256 CDNA 4 Compute Units and 256 MB of Infinity Cache for low-latency memory access. These GPUs introduce novel data types like FP4, FP6, and FP8, boosting throughput while slashing energy use, ideal for tackling the inference bottlenecks in modern AI models. Backed by AMD's GPU driver 30.10.0, ROCm now runs seamlessly on OSes including Rocky Linux 9, Ubuntu 22.04.5/24.04.3, RHEL 9.4/9.6, and Oracle Linux 9, with flexible partitioning for bare-metal setups.

Software Innovations Driving AI Forward

ROCm 7.0 supercharges AI frameworks with day-one compatibility for PyTorch 2.7/2.8, TensorFlow 2.19.1, and JAX 0.6.x. Highlights include optimized Docker images for efficient deployment, new kernels like 3D BatchNorm and APEX Fused RoPE, and C++ compilation via amdclang++. For inference, vLLM and SGLang now natively handle FP4 on MI350 GPUs, enabling distributed prefill/decode for dense LLMs and MoE models.

Model optimization shines with AMD Quark's production-ready quantized models, such as OpenAI's gpt-oss-120b/20b, DeepSeek R1, Llama 3.3 70B, Llama 4 variants, and Qwen3 (up to 235B parameters). Tools like Primus streamline end-to-end training and fine-tuning on Instinct GPUs, with reinforcement learning on the horizon. Enterprise features, including AMD Resource Manager for smart scheduling and AI Workbench for Kubernetes/Slurm integration, make scaling effortless.
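A back-of-envelope view of why these low-precision formats matter for models of the sizes listed, counting weight storage only (KV cache, activations, and per-layer scale factors are ignored; sizes are approximate):

```python
# Rough weight-memory footprint at different precisions, showing why the
# FP4/FP8 support in ROCm 7.0 matters for large models. Illustrative only.

def weights_gb(params_billion, bits):
    """Approximate weight storage in GB for a model at a given precision."""
    return params_billion * 1e9 * bits / 8 / 1e9

for bits in (16, 8, 4):
    print(f"120B params @ FP{bits}: {weights_gb(120, bits):.0f} GB")
```

Halving the bits halves the footprint, so an FP4 quantization of a ~120B-parameter model needs roughly a quarter of the memory of its FP16 original, directly easing the inference bottleneck the post describes.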

Performance Boosts and Ecosystem Synergy

Expect major gains from the Stream-K algorithm, which auto-balances GEMM operations for peak GPU utilization without manual tweaks. Libraries like hipBLASLt, rocBLAS, hipSPARSE, and rocSOLVER now support low-precision formats (FP8/BF8) with fused operations, accelerating AI and HPC tasks. RCCL's zero-copy transfers and FP8 precision speed up multi-GPU comms, while rocAL and RPP enhance vision pipelines with hardware decoding and FP16 support.
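To illustrate the idea behind Stream-K (a simplified sketch of the decomposition concept, not AMD's implementation): instead of assigning whole output tiles to compute units, the total multiply-accumulate work is flattened into one range and split evenly, so no unit sits idle at the tail of the GEMM.

```python
# Sketch of Stream-K-style partitioning: flatten all (tile, k-step) work
# into one range and split it evenly across workers, instead of giving
# each worker whole output tiles.

def stream_k_partition(num_tiles, k_iters, num_workers):
    """Return per-worker [start, end) ranges over the flattened work."""
    total = num_tiles * k_iters
    base, rem = divmod(total, num_workers)
    ranges, start = [], 0
    for w in range(num_workers):
        size = base + (1 if w < rem else 0)  # spread the remainder evenly
        ranges.append((start, start + size))
        start += size
    return ranges

# 10 output tiles x 8 k-iterations split across 3 workers:
for w, (a, b) in enumerate(stream_k_partition(10, 8, 3)):
    print(f"worker {w}: iterations [{a}, {b})  ({b - a} units)")
```

Because every worker gets within one unit of the same load regardless of tile count, utilization stays high without hand-tuning tile sizes per problem shape, which is the "no manual tweaks" benefit the paragraph mentions.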

Partnerships amplify this: Collaborations with PyTorch, TensorFlow, JAX, OpenAI, and inference engines like vLLM ensure seamless integration. Benchmarks show impressive results for models like DeepSeek R1 (FP4) and Llama 3.3 70B (FP8), with detailed metrics available in ROCm docs.

Profiling gets smarter too—ROCProfV3 and AQL Profiler add PC-sampling and SQL exports, while ROCgdb aids debugging. HIP 7.0 adds CUDA-like APIs and zero-copy GPU-NIC transfers, powered by LLVM 20.

Looking Ahead: Innovation Without Limits

ROCm 7.0 isn't just a release; it's a foundation for future AI breakthroughs. Upcoming updates include refreshed profiler UIs, AMD Infinity Storage to tackle I/O hurdles, and expanded Primus features. As an open, enterprise-grade ecosystem, ROCm continues to democratize high-performance AI on AMD hardware.

Whether you're training massive models or deploying at scale, ROCm 7.0 equips developers with the tools for faster, greener AI. Dive in and experience the difference.

Source: https://rocm.blogs

Monday, September 8, 2025

AI Age - NVIDIA CFO Highlights Blackwell GB300 Ramp and Surging AI Chip Demand in Q2

 

NVIDIA CFO Highlights Blackwell GB300 Ramp and Surging AI Chip Demand in Q2

NVIDIA's CFO recently shared insights on the company's Q2 performance, emphasizing significant growth in data center revenues and the rapid scaling of its Blackwell GB200 and GB300 AI solutions. Below are the key takeaways for those tracking NVIDIA’s advancements in AI and data center technology.

Strong Data Center Revenue Growth

NVIDIA reported a 12% quarter-over-quarter revenue increase in Q2, driven by its data center and networking segments, even after excluding China-specific H20 AI GPUs. Looking ahead, NVIDIA projects a robust 17% sequential growth for Q3, signaling strong demand for its AI and computing solutions.

Blackwell GB200 and GB300 Scale-Up Success

The ramp-up of NVIDIA’s Blackwell GB200 rack-scale systems and GB300 Ultra has exceeded expectations. The CFO described the transition as “seamless,” with significant scale and volume hitting the market. Analysts predict up to 300% sequential growth for the GB300 in Q3, underscoring NVIDIA’s leadership in high-performance AI infrastructure.

Navigating China’s H20 AI GPU Market

Despite geopolitical challenges, NVIDIA has secured licenses to ship H20 AI GPUs to key Chinese customers. While uncertainties remain, the CFO expressed optimism about completing these shipments, potentially adding $2 billion to $5 billion in revenue. This reflects NVIDIA’s strategic focus on maintaining its foothold in the Chinese market amid local pushes for domestic chip alternatives.

Addressing AI Chip Competition and Power Efficiency

Recent market concerns, including Broadcom’s $10 billion custom AI chip contract, have sparked debates about cost-effective AI chips. The CFO emphasized that power efficiency is critical for AI computing, particularly for reasoning models and agentic AI. NVIDIA’s focus on data center-scale solutions prioritizes performance per watt and per dollar, ensuring long-term efficiency for large-scale AI clusters.

Next-Gen Vera Rubin AI Chips on Track

NVIDIA’s next-generation Vera Rubin AI chips are progressing on a one-year cadence, with all six chips already taped out. The CFO highlighted early demand, noting “several gigawatts” of power needs already penciled in for Rubin-powered data centers, positioning NVIDIA to meet future AI infrastructure demands.

Why NVIDIA’s Strategy Matters

NVIDIA’s ability to scale its Blackwell GB200 and GB300 solutions, combined with its forward-looking approach to power-efficient AI systems, reinforces its dominance in the AI and data center markets. As demand for AI-driven computing grows, NVIDIA’s innovations in rack-scale solutions and next-gen chips like Vera Rubin ensure it remains a key player in the industry.

For the latest updates on NVIDIA’s AI advancements and market performance, stay tuned to xxxpctech for in-depth insights.

Friday, August 22, 2025

AMD iROCm - GPT OSS 20B and Flux on a Stick: How a $100 AMD XDNA2 NPU Could Democratize AI and the ROCm Ecosystem

[Opinion] - Why NPUs Could Outrun GPUs in the AI Inference Race.



For years, GPUs have been the workhorse of AI. They’re powerful, massively parallel, and great at crunching the huge matrices that deep learning demands. But here’s the thing: GPUs were never designed for AI — they were designed for graphics. AI/ML just happened to fit.

Enter the NPU, Neural Processing Unit. Unlike GPUs, NPUs are purpose‑built for AI workloads. They don’t carry the baggage of graphics pipelines or shader cores. Instead, they’re optimized for the dataflow patterns of neural networks: moving data as little as possible, keeping it close to the compute units, and executing operations with extreme efficiency.

Why XDNA 2 Is a Leap Forward 

AMD’s XDNA 2 architecture, debuting in Strix Point, is a perfect example of this new breed. It delivers up to 50 TOPS of AI performance with native BF16 support, all in less than 10% of the SoC’s die area. That’s roughly 20 mm², or about 30 mm² if you add a basic LPDDR5X memory interface.





To put that in perspective:

  • A GPU block capable of similar AI throughput would be many times larger and draw far more power.

  • XDNA 2 achieves a 5× compute uplift over the first‑gen XDNA, with 2× the power efficiency. That means more AI work per watt, and less heat to manage.

  • An estimated 2–5 W TDP, or around 10 W if pushed past the efficiency sweet spot beyond the 50 TOPS threshold.

This efficiency comes from its tiled AI Engine arrays, local SRAM, and deterministic interconnects — all designed to minimize data movement, which is the hidden energy hog in AI processing. 

Because NPUs are so compact and efficient, they scale in ways GPUs can’t. You can add more NPUs without blowing up your power budget or die size. That’s why the idea of putting an XDNA 2 into a USB stick form factor isn’t just possible: it’s practical.

I’d venture that if AMD scaled up their NPUs, say, 10× larger than current, a 250 mm² die could deliver 500–550 TOPS while consuming under 50 W. An MCM design could reach 2,500 TOPS BF16 (dense or sparse) at 200 W, outperforming all GPUs currently used for inference.
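Written out, that speculation is just linear scaling from the baseline figures quoted earlier in the post (about 50 TOPS at about 5 W, taking ~25 mm² as a middle figure between the bare engine and the version with a memory interface). Real designs rarely scale this cleanly, so treat this as the author's thought experiment, not a projection:

```python
# Linear-scaling thought experiment from the post's baseline NPU figures.
# Purely speculative: real silicon adds interconnect, I/O, and thermal
# overheads that do not scale linearly.

BASE_TOPS, BASE_MM2, BASE_W = 50, 25, 5   # approximate XDNA 2 baseline

def scaled(factor):
    """Naive linear scale-up of throughput, area, and power."""
    return BASE_TOPS * factor, BASE_MM2 * factor, BASE_W * factor

tops, mm2, watts = scaled(10)             # the hypothetical 10x die
print(f"10x die: {tops} TOPS, {mm2} mm^2, ~{watts} W")
print(f"efficiency: {tops / watts:.0f} TOPS/W")
```

The 10× case lands on the ~250 mm² / ~500 TOPS / ~50 W numbers in the paragraph above; the MCM figure then assumes several such dies sharing a package.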

The iROCm Stick Concept - Hopefully AMD will be inspired by my idea.

Imagine a sleek red USB4/Thunderbolt stick, branded iROCm, with an XDNA 2 NPU inside and LPDDR5X memory onboard.

Two models could make AI acceleration accessible to everyone:

Model | Memory | Price | Target Audience
iROCm Stick 8GB | LPDDR5X @ 7533 MT/s | $100–120 | Students, hobbyists, AI learners
iROCm Stick 16GB | LPDDR5X @ 7533 MT/s | $150 | Indie devs, researchers, edge AI prototyping

You plug it in, and instantly your laptop, even a thin-and-light, gains a dedicated AI accelerator. No driver nightmares, no bulky eGPU enclosure. Just ROCm-powered AI in your pocket. Under 10 W, portable, affordable: everyone (and their dog) can try the ROCm ecosystem and any apps AMD develops.

Wednesday, June 4, 2025

Typing test [EN-US] - Test your typing skills

 

Quick Typing Test

[Interactive typing-test widget: click 'Start Test' to begin. Reports WPM, accuracy, and a 30-second timer.]

Monday, February 17, 2025

Crypto thief Paul Vernon behind Cryptsy scam found running new scam - China and US now united against the criminal

 The notorious crypto scammer behind the Cryptsy and Altilly exit scams has resurfaced under a new identity. This time, he operated as “Karl” from Xeggex, another fraudulent exchange that has now collapsed. Thanks to the efforts of the crypto community, he appears to have finally been tracked down—despite the FBI and U.S. authorities failing to bring him to justice.

- Who Is He?



Paul Vernon, also known as "Michael O’Sullivan" or Karl, has a long history of crypto fraud:

  • Cryptsy (2016) – Feds: Cryptsy Founder Stole Millions in Users' Cryptocurrency
    Federal authorities report that Paul Vernon orchestrated a "sophisticated theft scheme" between 2013 and 2015, stealing over $9M before fleeing to China. (The estimated total of misappropriated crypto has now surpassed $1 billion.)
  • Altilly (2020) – Orchestrated a fake exchange hack, stealing $3.3M+.
  • Xeggex (2022–2024) – Followed the same pattern before shutting down.

What Happened with Xeggex?

Xeggex started as a seemingly legitimate exchange, offering free listings to attract users. However, things took a turn when:

  • The platform claimed a Telegram hack.
  • Then reported database corruption.
  • Finally, announced missing funds, with all major assets gone.

Now, Vernon is erasing traces of himself. 

BREAKING: We Found Him 

Investigators have located his mansion in Dalian, China. He is reportedly using fake passports from Ecuador and Vanuatu under the name Michael O’Sullivan. The FBI and U.S. Marshals have been informed, and discussions with U.S. authorities are underway. 

🔗 Full Investigation & Evidence (Live Updates):
👉 Click here

How You Can Help

  • If you lost funds, track Xeggex wallets and report them to major exchanges.
  • Share this with crypto communities to warn others.
  • If you have law enforcement or journalist contacts, help bring attention to this case.

The fight against crypto fraud continues—let’s make sure Vernon doesn’t escape justice again. This time we cannot let him escape and continue the cycle of fraud and theft without punishment, destroying the lives of many people.

All but one of the 17 charges against Vernon carry maximum prison sentences of 20 years, according to the indictment, which has been embedded at the end of this article.

Sources: 🚨 Paul Vernon (Cryptsy, BiteBi9, Altilly, Xeggex) – Serial Crypto Scammer and Wanted Fugitive Exposed! 🚨 : r/Bitcoin 

miaminewtimes.com/news/cryptsy-paul-vernon-stole-millions-cryptocurrency-13799886


Monday, December 23, 2024

LPDDR6 CAMM2 will pave the way for the end of conventional dGPUs.


Compact, efficient, durable, and excellent value for money: in recent years, integrated GPUs (iGPUs) have made significant strides, thanks to advancements in semiconductor manufacturing and memory technology. Companies like Apple, Intel, and AMD have pushed the boundaries of iGPU performance, driven by:

  • TSMC's advanced lithography: Enabling denser chip designs with higher performance within a limited power budget (TDP).
  • Memory innovations from Samsung, SK Hynix, and Micron: Delivering faster memory technologies with increased bandwidth to feed these powerful iGPUs.
  • GPU architectural improvements, including advanced data management and compression techniques. More robust APIs like DX12 and Vulkan also play a role here.

Apple has taken a bold approach, dedicating significant silicon real estate and employing high-bandwidth memory (8 memory channels, 256/192-bit bus) to achieve performance levels comparable to mid-range dedicated GPUs (dGPUs) like the NVIDIA GeForce RTX 4070 or AMD Radeon RX 7800 XT. However, this approach comes at a high cost, exceeding even the most expensive mobile dGPUs (like the NVIDIA GeForce RTX 4090 Mobile). Furthermore, Apple's ecosystem limitations restrict flexibility and customization compared to Windows systems. It won't be everyone's cup of tea.

Intel and AMD, targeting a broader market, focus on more accessible solutions ranging from budget-friendly handhelds to premium laptops. This necessitates a more balanced approach, limiting the use of expensive, high-bandwidth memory configurations. Currently, they primarily rely on dual-channel LPDDR5X at up to 8000 MT/s, resulting in a theoretical maximum bandwidth of around 128 GB/s, comparable to a dGPU like the AMD Radeon RX 6500 XT. This bandwidth limitation restricts the performance potential of iGPUs with more than a few hundred shaders. For example, AMD's 16-compute-unit iGPU (the Radeon 890M) may not deliver the expected performance due to insufficient bandwidth.

LPDDR6 promises to revolutionize iGPU performance. With a single CAMM2 LPDDR6 module offering a 192-bit bus, theoretical bandwidth can reach approximately 322 GB/s:

  • Bits to bytes conversion: 192 bits / 8 bits/byte = 24 bytes
  • Transfer rate calculation: 14,400,000,000 transfers/second * 24 bytes/transfer = 345,600,000,000 bytes/second
  • Conversion to gigabytes per second: 345,600,000,000 bytes/second / 1,073,741,824 bytes/GB ≈ 322 GB/s
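The bullet-point arithmetic above can be checked in a few lines:

```python
# Verifying the LPDDR6 CAMM2 bandwidth estimate from the bullets above.

bus_bits = 192
transfers_per_sec = 14_400_000_000    # 14.4 GT/s
bytes_per_transfer = bus_bits / 8     # 24 bytes per transfer

bytes_per_sec = transfers_per_sec * bytes_per_transfer
gb_per_sec = bytes_per_sec / 1_073_741_824   # binary GB, as used in the post

print(f"{gb_per_sec:.0f} GB/s")
```

Against the ~128 GB/s of today's dual-channel LPDDR5X setups, that works out to the roughly 2.5× uplift claimed below.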

This represents a 2.5x improvement in theoretical bandwidth, potentially enabling iGPUs in mainstream notebooks and handhelds to rival popular dGPUs like the NVIDIA GeForce RTX 4060 or AMD Radeon RX 7600 XT. The technology behind it:


 The JEDEC JC-45 Committee is actively developing two groundbreaking memory module technologies: a new Tall MRDIMM form factor and a next-generation CAMM module for LPDDR6.

The Tall MRDIMM aims to significantly increase memory bandwidth and capacity by allowing for twice the number of DRAM single-die packages on the module without requiring 3D stacking. This innovative approach leverages a taller form factor while maintaining the existing DRAM package.

Complementing this, JC-45 is developing a cutting-edge CAMM module specifically designed for LPDDR6 operation at speeds exceeding 14.4 GT/s. This advanced module will feature a 24-bit subchannel, a 48-bit channel, and a connector array.

Both these projects are crucial for advancing memory technology and are currently under development within the JC-45 Committee. JEDEC strongly encourages industry participation to shape the future of memory standards. Membership provides valuable benefits, including access to pre-publication proposals and early insights into critical projects like MRDIMM and the next-generation CAMM.


Source: JEDEC Unveils Plans 

Tuesday, April 23, 2024

Lenovo takes the lead, debuting LP-CAMM2 modules

 Tech Advancements:



Dell's development of the CAMM memory spec, donated to the JEDEC memory standard committee, laid the groundwork for innovations seen in the ThinkPad P1 Gen 7. The CAMM spec aims to shrink standard memory form factors while optimizing communication pathways between memory and host systems, leading to improved performance and efficiency. Third-party manufacturers like Adata have already demonstrated modules following the CAMM spec, with Samsung recently unveiling its initial CAMM-based products, signaling a new era in memory technology.

Lenovo Unveils ThinkPad P1 Gen 7: A Leap Forward in Laptop Innovation


Lenovo has once again pushed the boundaries of laptop technology with the introduction of the ThinkPad P1 Gen 7. Building upon the success of its predecessor, the 2024 iteration boasts a new chassis, Intel Meteor Lake processor options, and an enhanced workstation GPU for the top-tier model. With the 2023 version receiving a commendable rating of 92%, anticipation is high for what the latest model has to offer.

One of the most groundbreaking features of the ThinkPad P1 Gen 7 is its use of the LPCAMM2 module from Micron, a first in the laptop world. LPCAMM, or "low-power compression attached memory module," addresses the limitations of traditional SO-DIMM and soldered LPDDR memory solutions. By incorporating LPDDR memory chips operating in dual-channel mode, LPCAMM2 not only enhances speed and efficiency but also offers user-replaceable capabilities, unlike soldered LPDDR chips. Lenovo claims that the up to 64 GB LPCAMM2 module inside the ThinkPad P1 Gen 7 saves a significant 64% of space compared to DDR5 SO-DIMM memory while requiring 61% less "active power."

Furthermore, users have the option to equip the ThinkPad P1 Gen 7 with a massive 8 TB of storage through two 4 TB drives arranged in a RAID 0/1 configuration. Performance-wise, the ThinkPad P1 Gen 7 can be configured with up to an Intel Core Ultra 9 185H CPU with vPro. Additionally, Lenovo offers the flexibility of choosing between regular gaming GPUs or upgrading to an RTX 4070 laptop GPU.

In terms of display, the ThinkPad P1 Gen 7 features a 16-inch 16:10 panel, with the option for a touch-capable OLED screen boasting a 3,840 x 2,400 resolution, 100% DCI-P3 coverage, and 400 nits of brightness. Lower-end variants offer 100% sRGB IPS displays in 1600p and 1200p options with 500 and 400 nits of brightness, respectively.

The ThinkPad P1 Gen 7 also impresses with its port selection, including two Thunderbolt 4 ports, one Type-A port, HDMI 2.1 output, and an SD card reader. With an entry price of 2,450 EUR ($2,621), the Lenovo ThinkPad P1 Gen 7 will be available for purchase starting in June 2024, promising consumers a leap forward in laptop innovation.

Tuesday, April 9, 2024

New High-end Entry in AMD's Instinct lineup Emerges

 


In recent developments reported by TrendForce, the landscape of export controls has undergone a significant expansion, encompassing not only previously restricted AI chips from industry giants NVIDIA and AMD but also their forthcoming next-generation successors. Notable additions to the list include NVIDIA's H200, B100, B200, GB200, and AMD's MI350 series, extending the scope beyond the previously known NVIDIA A100/H100, AMD MI250/300 series, NVIDIA A800, H800, L40, L40S, and RTX 4090.



In response to these regulatory changes, manufacturers in the High-Performance Computing (HPC) sector have swiftly adapted by developing products that stay within the new Total Processing Performance (TPP) and Performance Density (PD) limits. Notably, NVIDIA has introduced adjusted versions of its H20, L20, and L2 models, ensuring they remain eligible for export and thus navigating the evolving regulatory landscape.

The revelation of AMD's MI350 series suggests a refresh or potentially a new chip built on an advanced 4 nm process, slated for launch in the latter half of this year. This underscores the industry's ongoing shift toward more advanced semiconductor technologies. It's worth noting that while much of the industry appears to be settled at the 4 nm node, Apple has made headlines by leaping to 3 nm despite challenges with low yields. This divergence in technological trajectories adds an intriguing dimension to the competitive dynamics within the semiconductor sector, emphasizing the importance of innovation and agility in navigating regulatory and technological landscapes alike.


Source: trendforce.com

Friday, July 29, 2022

[Leak] - Ryzen 5 7600X beats the 12900K in benchmarks, coming in 22% faster.

AMD's Zen 4-based, six-core Ryzen 5 7600X CPU has surfaced once again, this time benchmarked in UserBenchmark. The 6-core "Zen 4" sample demolished Intel's i9-12900K in single-core performance and was up to 23% faster in multi-threaded tests than its predecessor, the 5600X.

The AMD Ryzen 5 7600X "Zen 4" CPU appeared in the UserBenchmark database and was spotted by TUM_APISAK. According to the available information, the chip is an engineering sample with OPN ID '100-000000593-20_Y'. The same chip had previously appeared in the Basemark GPU benchmark running on a Gigabyte X670E motherboard. The latest entry shows it running on an NZXT B650 motherboard manufactured by ASRock, with two 16 GB DDR5-4800 memory modules. In terms of performance, while UserBenchmark is known for its bias toward Intel CPUs, the AMD Ryzen 5 7600X still beats Intel's flagship Core i9-12900K in single-core tests. The Ryzen 5 7600X scores an impressive 243 points, while the Intel Core i9-12900K averages 200 points, a single-threaded uplift of roughly 22%. Compared with the Ryzen 5 5600X, the Zen 4 chip delivers a 55% gain in single-core tests. According to rumors, the Zen 4-based CPUs will be revealed in the first week of August at a special event alongside the motherboard manufacturers.
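The headline uplift can be reproduced from the quoted UserBenchmark scores:

```python
# Reproducing the single-core uplift claim from the quoted scores.

score_7600x = 243    # single-core points, per the leaked entry
score_12900k = 200   # average single-core points

uplift = (score_7600x / score_12900k - 1) * 100
print(f"7600X vs 12900K single-core: +{uplift:.1f}%")
```

243 over 200 is a 21.5% gain, which rounds to the ~22% in the headline.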

Saturday, June 13, 2020

[Course] - Learn to build your own app from scratch and earn money - Android or iOS



Want to learn how to build Android apps from scratch? Look no further! Our Android app development course is perfect for beginners and enthusiasts who want to master the skills needed to create their own amazing apps. With guidance from experienced instructors, you will learn the fundamentals of programming, user interface design, and Android app development. Don't miss the opportunity to become a successful Android app developer. Sign up now!


How to get a debit card with cryptocurrencies (Bitcoin/Ether/LTC)

1 - Just open your account here: bit.ly/3cVPT8T
2 - Unlimited after KYC verification.
3 - Valid worldwide.

Wednesday, March 18, 2020

Help fight the coronavirus using your PC!?

We are all stuck at home for the foreseeable future, so the folks at AnandTech are doing the only thing they can do: getting into trouble and picking fights. And they want your help! Read on to learn about their "friendly" race with Tom's Hardware, putting the power of distributed computing to work in the fight against COVID-19.


For most of this month, it has been hard to have a conversation without the word "coronavirus" coming up at some point. The novel coronavirus, SARS-CoV-2, and the disease it causes, COVID-19, have quickly brought much of the West to a standstill, after having had a similar chilling effect on parts of China earlier this year. And while the ramifications of this virus will be felt for many months, there is a more immediate and unavoidable problem: we are all stuck at home, and we are all going stir-crazy!

AnandTech's publisher, Future Plc, has closed its offices for the near future, assigning everyone to work from home. So all the editors, photographers, and other staff who would normally be gathering at work are now at home, whether they like it or not, and the cabin fever is real. And this effect is apparently especially strong among the staff of Tom's Hardware, their loyal compatriots and competitors.

Against all logic and common sense, the folks at Tom's Hardware challenged the mighty AnandTech to a distributed-computing race. And of course, given everything that is going on, it is all about the coronavirus. Never one to back down, especially after winning the two previous races, AnandTech accepted the challenge, and starting today the two sites will race to see which team can contribute more toward finding treatments for COVID-19!

Now, what is this all about, you may be asking? Folding@Home, a long-running distributed computing project, recently added COVID-19-related research tasks to its list of projects. Hosted by Washington University in St. Louis, Folding@Home (FAH) lets individuals contribute compute time to the project's research efforts. This, in turn, helps researchers fight diseases related to protein (mis)folding, such as Alzheimer's disease, Huntington's disease, and now COVID-19. Folding@Home has been running for nearly two decades; along with a long-standing AnandTech folding team, we even used it in GPU benchmarks for several years.

With the global pandemic underway, the researchers behind Folding@Home decided to shift gears and use the massive power of their distributed computing project to simulate SARS-CoV-2, in order to better understand how it works and, ultimately, to try to find treatments for it. And given the importance of what's happening right now, and the fact that we all really, really want to go outside, it seems only fitting that AnandTech and Tom's Hardware join the fight, as much as a bunch of tech nerds can, anyway.

Starting today, March 18, we are holding a four-week folding race to see which team is better. The more computer time donated to Folding@Home, and thus the more protein-folding work completed, the more points a team scores, with the highest-scoring team being crowned the winner.
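For the curious, the points at stake follow Folding@Home's scoring system, where work units returned quickly earn a "quick return bonus" multiplier on top of their base credit. A minimal sketch of that formula is below; the constant `k`, the timeout, and the point values are illustrative, as the real values are assigned per project:

```python
import math

def wu_points(base_points: float, k: float, timeout_days: float, elapsed_days: float) -> float:
    """Final credit for one work unit under the quick-return bonus.

    A fast turnaround earns a multiplier of sqrt(k * timeout / elapsed),
    while a slow return never drops below the base points.
    """
    bonus = math.sqrt(k * timeout_days / elapsed_days)
    return base_points * max(1.0, bonus)

# Returning a 1000-point WU in 1 day against a 3-day timeout (k = 0.75):
print(wu_points(1000, 0.75, 3, 1))  # 1500.0
# A slow return still earns the base credit:
print(wu_points(1000, 0.75, 3, 4))  # 1000.0
```

Note that in the real project the bonus only applies when folding with a passkey configured; without one, each work unit simply earns its base points.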

AnandTech, of course, is no slouch when it comes to distributed computing. Our team, appropriately named TeAm AnandTech, has been at it since late 1998, which is almost as long as AnandTech has been in operation. Among its notable accomplishments are besting the Macintosh evangelists, Slashdot, Tweakers.net, and more across over a dozen distributed computing projects ranging from computer science to biology to hunting for alien signals. And of course, we've beaten Tom's Hardware a couple of times as well.

Which is why I'm all the more surprised that Tom's Hardware was willing to challenge us to a race. Under the auspicious leadership of Avram "third time's the charm" Piltch, Tom's Hardware is rallying its forces to not only help tackle COVID-19, but to beat TeAm AnandTech in the process. And while we respect our colleagues at Tom's Hardware, someone has to knock some sense into them every now and then. Distributed computing is TeAm AnandTech territory, and TeAm AnandTech will not be outdone.

So I'm once again calling on AnandTech's readers and the intrepid members of TeAm AnandTech to band together for a good cause: beating Tom's Hardware! Oh, and perhaps also finding ways to reduce the impact of COVID-19 on the world at large...

Ultimately, this race is for fun, but it's also for a good cause. The SARS-CoV-2 virus is a world-changing event, and along with the immediate medical risks of the novel virus, the containment measures it requires are intense. The Folding@Home project is working on several simulations to improve humanity's understanding of the virus and the disease it causes, with the goal of kick-starting new treatments and bringing the virus under control. It's a worthy cause, and as a result I'd like to encourage everyone to take part in our race over the next month.

Full details on the contest, including how to download the Folding@Home client and join TeAm AnandTech, our distributed computing team, can be found here: https://forums.anandtech.com/threads/anandtech-vs-toms-hardware-folding-home-coronavirus-race-thread.2578326/
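Once the client is installed, joining the team mostly comes down to setting your donor name and the team number in the client's `config.xml`. A minimal sketch follows; the username and passkey are placeholders, and you should confirm the team number and passkey setup in the forum thread linked above:

```xml
<config>
  <!-- Your donor name; points accrue to this identity -->
  <user value="YourName"/>
  <!-- TeAm AnandTech's Folding@Home team number -->
  <team value="198"/>
  <!-- Optional passkey (placeholder shown); needed to earn quick-return bonus points -->
  <passkey value="0123456789abcdef"/>
</config>
```

The same settings can also be entered through the client's web control panel, so editing the file by hand is only one way to do it.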

I should also quickly note that, as the project has picked up a lot of attention over the past few days (we see you over there, PCMR!), the Folding@Home servers themselves are under a bit of load while the team behind it works to keep new work units flowing. So if you aren't immediately receiving COVID-19 work units after setting up the client, don't despair. Work units are going out, and any (and all) Folding@Home work counts toward this race.

Finally, once you're set up, be sure to drop by our distributed computing forum and say hello. The team captain is keeping track of how many people sign up, and it's the best place to connect with other team members and get answers to any questions.

Saturday, February 8, 2020

Blockchain & Mobile Apps - infographic

Blockchain & Mobile Apps: Have you ever thought about how exciting and pleasant our world has become with the arrival of so many kinds of mobile apps? These apps are making the world a smaller place through communication, education, finance, and more.

At the same time, developers are experimenting in different ways to bring mobile apps closer to people. You can now find blockchain apps in several fields, such as healthcare, education, sports betting, banking, and real estate. But the question that arises here is: are they secure? What measures do they take to protect our data?

Source: https://acmarket.biz/

Friday, August 23, 2019

Intel attacks AMD: Ryzen 3000 is false advertising.

Did AMD lie to consumers?

After many recent controversies, Intel has emerged once again, this time to take issue with the boost clock specifications AMD uses for Ryzen 3000. Intel is trying to "educate" the press about AMD CPUs and their boost behavior. In a newly released slide deck, Intel included quotes from reviews noting that AMD's CPUs did not reach their advertised boost clocks on all cores. That may even be true, but it doesn't change the fact that, in consumers' minds, AMD Ryzen chips are more competitive on price and performance than Intel CPUs:

[Image: Intel slide on AMD boost clocks]

It makes sense to pick the best cores and try to push the highest frequency possible. However, the boost clock is not something meant to be sustained on all cores at all times. You can see the Intel slide accusing AMD of false boost marketing in the image above. It's a shame that a company like Intel is trying to take shots at AMD this way. Intel is the bigger company and should act like it. AMD, despite being smaller, executes more efficiently than Intel; yet instead of working to improve its own products and fixing the 10nm process that has been broken for years, Intel chooses this arrogant, biased marketing. These slides claim that AMD is supposedly trying to deceive consumers when it comes to its CPUs' boost clocks.

Looking at the other side, Intel misrepresents the TDP of its own processors: the 9900K is documented with a 95 W TDP, but in power consumption and thermal tests it gets close to 200 W. So it's obvious that Intel is being hypocritical here.

"AMD only wins in Cinebench; in real-world applications we have the best performance" - Intel

By Intel's standards, real-world applications are "the most popular applications being used by consumers." The stated purpose of these tests was to show users real performance in the applications they would actually use, rather than ones aimed at a particular niche. Intel claimed that while Cinebench, a popular benchmark used by both AMD and Intel to compare processor performance, is widely used by reviewers, only 0.54% of all users actually run it. Unfortunately for Intel, that doesn't mean much, since the real application Cinebench represents is Cinema 4D, software that is still quite popular and widely used; they also left Blender off the list. The truth is that most of the software on the list is either optimized for a single core or irrelevant for benchmarking, like Word and Excel. Who cares about that?


"A year ago, when we introduced the i9-9900K," says Intel's Troy Severson, "it was dubbed the fastest gaming CPU in the world. And I can honestly say nothing has changed. It's still the fastest gaming CPU in the world. I think you've heard a lot of press from the competition recently, but when we go out and do real-world testing, not synthetic benchmarks, but real-world testing of how these games perform on our platform, we stack the 9900K up against the Ryzen 9 3900X. They're running a 12-core part and we're running an eight-core."

"So again, you're hearing a lot of things from our competition," says Severson. "I'll be very honest, very blunt, and say, hey, they've done a great job closing the gap, but we still have the highest-performing CPUs in the industry for gaming, and we're going to maintain that lead."


I find it amazing how enthusiastically they celebrate a 2-6% advantage in games while losing by a massive 20-30% in heavily multithreaded applications. These are not good times for team blue.

Source: xfastest, pcgamesn