NVIDIA® Jetson™ TX2 series modules give you exceptional speed and power efficiency in an embedded AI computing device. Each supercomputer-on-a-module brings true AI computing to the edge with an NVIDIA Pascal™ GPU, up to 8 GB of memory and 59.7 GB/s of memory bandwidth, and a wide range of standard hardware interfaces.
Jetson TX2 is a 7.5-watt supercomputer on a module that brings true AI computing to the edge. It's built around an NVIDIA Pascal™-family GPU and loaded with 8 GB of memory and 59.7 GB/s of memory bandwidth. A variety of standard hardware interfaces make it easy to integrate into a wide range of products and form factors.
Features
Size
MINIMIZE YOUR FOOTPRINT
Now you can get exceptionally high compute, accuracy, and power efficiency in a module the size of a credit card. Its small 50 mm x 87 mm size enables real deep learning applications in small form-factor products like drones and more.
Performance
MAXIMIZE YOUR PERFORMANCE
Experience up to twice the performance or twice the energy efficiency of Jetson TX1. It's all made possible by Jetson TX2's 256-core NVIDIA Pascal architecture and 8 GB of memory for the fastest compute and inference.
Power
OPTIMIZE YOUR POWER EFFICIENCY
With Jetson TX2, you can now run large, deep neural networks for higher accuracy on edge devices. At just 7.5 watts, it delivers 25X more energy efficiency than a state-of-the-art desktop-class CPU. This makes it ideal for real-time processing in applications where bandwidth and latency can be an issue, including factory robots, commercial drones, enterprise collaboration devices, and intelligent cameras for smart cities.
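As a rough sanity check on the efficiency claim, peak throughput per watt can be worked out from the module's own published figures (a sketch using the 1.33 TFLOPS FP16 and 7.5 W numbers from the specifications below; the 25X desktop-CPU comparison is NVIDIA's and is not derived here):

```python
# Figures from the Jetson TX2 spec sheet.
peak_fp16_tflops = 1.33   # peak FP16 throughput, TFLOPS
power_budget_w = 7.5      # low-power operating point, watts

# Peak compute efficiency in GFLOPS per watt.
gflops_per_watt = peak_fp16_tflops * 1000 / power_budget_w
print(f"{gflops_per_watt:.0f} GFLOPS/W")  # ~177 GFLOPS/W peak
```

This is a theoretical peak; sustained efficiency on real workloads will be lower.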
A JETSON TX2 FOR ANY APPLICATION
The extended Jetson TX2 family of embedded modules provides up to 2.5X the performance of Jetson Nano in as little as 7.5 W. Jetson TX2 NX offers pin and form-factor compatibility with Jetson Nano, while Jetson TX2, TX2 4GB, and TX2i all share the original Jetson TX2 form factor. The rugged Jetson TX2i is ideal for settings such as industrial robots and medical equipment.
Specifications
| Specification | Jetson TX2 |
|---|---|
| AI Performance | 1.33 TFLOPS (FP16) |
| GPU | NVIDIA Pascal™ architecture with 256 NVIDIA CUDA cores |
| CPU | Dual-core NVIDIA Denver 2 64-bit CPU and quad-core Arm® Cortex®-A57 MPCore processor complex |
| Memory | 8 GB 128-bit LPDDR4, 59.7 GB/s |
| Power | 7.5 W / 15 W |
| PCIe | Gen 2: 1 x4 + 1 x1 or 2 x1 + 1 x2, total 50 GT/s |
| CSI Camera | Up to 6 cameras (12 via virtual channels); 12 lanes MIPI CSI-2 (3x4 or 6x2), D-PHY 1.2 (up to 30 Gbps) |
| Video Encode | H.265: 1x 4K60, 3x 4K30, 4x 1080p60, 8x 1080p30; H.264: 1x 4K60, 3x 4K30, 7x 1080p60, 14x 1080p30 |
| Video Decode | 2x 4K60, 4x 4K30, 7x 1080p60, 14x 1080p30 (H.265 and H.264) |
| Display | 2x multi-mode DP 1.2/eDP 1.4/HDMI 2.0; 2x x4 DSI (1.5 Gbps/lane) |
| Networking | Onboard Wi-Fi; 10/100/1000 BASE-T Ethernet |
| Mechanical | 87 mm x 50 mm; 400-pin connector; Thermal Transfer Plate (TTP) |
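The headline GPU and memory figures above are mutually consistent, as a quick back-of-the-envelope check shows. This sketch assumes a ~1.3 GHz maximum GPU clock and an LPDDR4 effective data rate of ~3733 MT/s; neither value appears in the table above, so treat both as illustrative assumptions:

```python
# Back-of-the-envelope check of the Jetson TX2 spec-sheet numbers.
# Assumed (not in the spec table): ~1.3 GHz max GPU clock,
# LPDDR4 effective data rate of ~3733 MT/s.

cuda_cores = 256
gpu_clock_ghz = 1.3
# Each CUDA core retires one FMA (2 FLOPs) per clock at FP32;
# Pascal on TX2 runs FP16 at twice the FP32 rate.
fp32_gflops = cuda_cores * 2 * gpu_clock_ghz            # ~665.6 GFLOPS
fp16_tflops = fp32_gflops * 2 / 1000                    # ~1.33 TFLOPS

bus_width_bytes = 128 // 8                              # 128-bit interface
data_rate_mts = 3733                                    # mega-transfers/s
bandwidth_gbs = bus_width_bytes * data_rate_mts / 1000  # ~59.7 GB/s

print(f"{fp16_tflops:.2f} TFLOPS FP16, {bandwidth_gbs:.1f} GB/s")
```

Both results land on the published 1.33 TFLOPS and 59.7 GB/s figures, which is why those two numbers always appear together in TX2 marketing copy.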