NVIDIA AI

Computer Hardware Manufacturing

Santa Clara, CA 1,701,803 followers

About us

Explore the latest breakthroughs made possible with AI. From deep learning model training and large-scale inference to enhancing operational efficiencies and customer experience, discover how AI is driving innovation and redefining the way organizations operate across industries.

Website
https://nvda.ws/2nfcPK3
Industry
Computer Hardware Manufacturing
Company size
10,001+ employees
Headquarters
Santa Clara, CA

Updates


🦞 Build-a-Claw is just getting started. Dive deeper with two livestreams next week:

📆 3/31, 11 AM PDT — Nemotron Labs: Build & scale your own claw with Nemotron
📹 YouTube: https://lnkd.in/ecRcZTZd
📍 AddEvent: https://lnkd.in/eEmkZ885

📆 4/1, 11 AM PDT — Build a Physical AI pipeline on DGX Spark with NemoClaw, Omniverse NuRec, and Isaac Sim
📹 YouTube: https://lnkd.in/eUDTRgzc
📍 AddEvent: https://lnkd.in/eesZAzyZ

From code to real-world autonomy. Let's go! 🙌


Thinking about your next NVIDIA Certification in 2026? Join our certification experts for a global webinar on April 30 to get an overview of refreshed and upcoming certifications in data science, physical AI, OpenUSD, and AI infrastructure. You’ll also get prep resources and practical guidance to help you succeed, including sample questions, study tips, and a live Q&A with the experts.

👉 Save your seat and start planning your 2026 certification goals.
🔗 https://nvda.ws/4lOKLZo


Want to build a long-running AI agent on Nemotron open models using OpenClaw? Recently announced at NVIDIA GTC San Jose, NemoClaw (in alpha) is an open-source reference stack that launches autonomous agents built with OpenClaw inside OpenShell's sandboxed environment, so with one command you can operate always-on, self-evolving assistants more safely. OpenShell (also in alpha) is the runtime and policy layer that adds enterprise-grade privacy and security guardrails.

In this Nemotron Labs livestream, you can:
- Watch a step-by-step install of NemoClaw on DGX Spark and see what it takes to get a claw running end-to-end.
- Learn how NemoClaw uses open models like NVIDIA Nemotron together with the NVIDIA OpenShell runtime to provide a safer environment for executing claws.
- Get a clear explanation of how NemoClaw, OpenShell, and OpenClaw fit together, so you know where each piece runs and how to start experimenting confidently.

Join the stream, bring your installation and deployment questions, and come ready with a DGX Spark if you have one so you can follow along and run through the installation live alongside us.

Build a Claw: NVIDIA NemoClaw on DGX Spark | Nemotron Labs


Cohere is defining the future of sovereign AI. Autumn Moulder, VP of Engineering at Cohere, breaks down the requirements for a fully sovereign AI stack at #NVIDIAGTC:
✅ Full-Stack Sovereignty: Deploying everything from the model to applications and reasoning traces within a single data center.
✅ The Power of Openness: Why open models like NVIDIA Nemotron are critical for data lineage and navigating evolving regulations.


Our Nemotron Nano 12B v2 VL brings video understanding on-prem. The MediaPerf benchmark, launched by Coactive AI, ranks our 12B model on par with 30B-size models at less than half the footprint:
✅ Cost Efficiency: Lowest cost for the Tagging Refinement workload.
✅ Pro-Grade Quality: 0.299 F1 on real-world media tasks.
✅ Massive Throughput: 4.48 hours of video processed per hour, 15% faster than the leading 30B open-source alternative.
✅ Sovereignty: Self-hostable, open model for every developer worldwide.
✅ Transparency: Open training datasets, techniques, and libraries.
🔗 https://mediaperf.org/


👀 LangChain is leveling up agentic workflows. Victor Moreira breaks down two essential tools for improving performance and reliability with Chris "The Wiz 🪄" Alexiuk:
✅ Deep Agent Harness to manage complex, long-duration tasks and boost LLM performance.
✅ LangSmith for tracing agents in production, enabling continuous improvement.
Catch more interviews from our #NVIDIAGTC developer livestream: https://lnkd.in/etHkVba7


    Spotted: 🐚 NVIDIA OpenShell is now available in early preview in OpenClaw v2026.3.22 🦞


Curious how NVIDIA's recently released open model, Nemotron 3 Super, was built, and how to run serious agentic workloads on top of it? Nemotron 3 Super is an open, hybrid Mamba-Transformer MoE model with a 120B-total/12B-active design, a 1M-token context window, latent MoE, and multi-token prediction, built for high-throughput, long-context reasoning in multi-agent systems. This livestream is your chance to talk directly with the NVIDIA AI researchers behind Super and learn what's new.

In this Nemotron Labs livestream, you can ask about:
- How the hybrid Mamba-Transformer + latent MoE backbone works in practice, and what it means for throughput, memory efficiency, and long-range reasoning.
- How the 1M-token context window and multi-environment RL alignment (via NeMo Gym and NeMo RL) help with multi-document reasoning, long-running agent memory, and tool-using workflows.
- How native NVFP4 pretraining and the open training pipeline (weights, datasets, and recipes) let you customize and deploy Super efficiently on your own infrastructure.
- How to get Nemotron 3 Super running quickly using the public deployment and fine-tuning cookbooks for vLLM, SGLang, TensorRT-LLM, and NeMo-based LoRA/SFT/RLVR workflows.

Don't miss your chance to learn from NVIDIA AI research experts. Join the stream, bring your multi-agent and long-context challenges, and come ready with questions about deploying Nemotron 3 Super everywhere from cloud endpoints to DGX Spark.
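As a rough sketch of the vLLM deployment path mentioned above: vLLM exposes an OpenAI-compatible server via `vllm serve`. The model ID, context length, and GPU count below are assumptions for illustration, not confirmed values; check the official cookbooks for the exact repository name and recommended flags.

```shell
# Sketch only: the Hugging Face model ID below is an assumption;
# the real repository name may differ.
pip install vllm

# Launch vLLM's OpenAI-compatible server.
# --max-model-len and --tensor-parallel-size are illustrative values:
# a 1M-token window on a 120B-total MoE needs multiple large GPUs.
vllm serve nvidia/Nemotron-3-Super \
  --max-model-len 1000000 \
  --tensor-parallel-size 8

# Query it like any OpenAI-compatible endpoint:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "nvidia/Nemotron-3-Super",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

The same endpoint shape works with any OpenAI-compatible client library, so agent frameworks can point at the local server instead of a hosted API.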

Ask the Experts: Meet Nemotron 3 Super AI Researchers | Nemotron Labs
