Machine Learning Applications and Compiler Engineer, LPX β€” New College Grad 2026

NVIDIA
May 07, 2026
Full-time
Remote friendly (Toronto, Ontario, Canada)
Canada
$135,000 - $220,000 CAD yearly
EDA Jobs, Level - Entry or Early Career

Role Summary

Develop algorithms and optimizations for NVIDIA's LPX inference and compiler stack, working at the intersection of compilers, large-scale systems, and deep learning. The role focuses on mapping neural network workloads to NVIDIA platforms and improving end-to-end inference performance.

Experience Level

Entry-level (new college graduate, 2026).

Responsibilities

Primary responsibilities include designing, implementing, and evaluating compiler and runtime features to optimize inference workloads.

  • Design, build, and maintain high-performance runtime and compiler components for inference optimization.
  • Define and implement mappings of large-scale inference workloads onto NVIDIA systems.
  • Integrate with the software ecosystem: libraries, tooling, and deployment interfaces.
  • Benchmark, profile, and monitor performance and efficiency metrics for compiler-generated mappings.
  • Collaborate with hardware architects to provide software feedback and co-design performance features.
  • Prototype and evaluate compilation and runtime techniques (graph transforms, scheduling, memory/layout optimizations).
  • Publish or present technical work at relevant ML, compiler, or architecture venues.

Requirements

Must-have technical skills and experience. Education requirements are listed separately below.

Must-have:

  • Strong software engineering skills in systems-level programming (C/C++ and/or Rust) and CS fundamentals (data structures, algorithms, concurrency).
  • Hands-on experience with compiler or runtime development (IR design, optimization passes, code generation).
  • Experience with LLVM and/or MLIR (building custom passes, dialects, or integrations).
  • Familiarity with deep learning frameworks (TensorFlow, PyTorch) and portable graph formats (ONNX).
  • Understanding of parallel and heterogeneous compute architectures (GPUs, spatial accelerators, domain-specific processors).
  • Experience using profiling, tracing, and benchmarking tools to diagnose and improve performance.
  • Strong communication and collaboration skills across hardware, systems, and software teams.

Nice-to-have:

  • Experience with MLIR-based compilers or multilevel IR stacks for graph-based deep learning workloads.
  • Prior work on spatial or dataflow architectures, static scheduling, or pipeline/tensor parallelism.
  • Contributions to open-source ML frameworks, compilers, or runtimes.
  • Research publications or presentations at conferences such as PLDI, CGO, ASPLOS, ISCA, MICRO, MLSys, or NeurIPS.

Education Requirements

Pursuing or recently completed an M.S. or Ph.D. in Computer Science, Electrical/Computer Engineering, or a related technical field, or equivalent practical experience.


About the Company

Company: NVIDIA

Headquarters: Santa Clara, California, USA

NVIDIA is a global leader in accelerated computing, renowned for its innovative solutions in AI and digital twins that transform diverse industries. The company specializes in networking technologies, providing end-to-end InfiniBand and Ethernet solutions for servers and storage that optimize performance and scalability. NVIDIA serves sectors such as high-performance computing, enterprise data centers, and cloud computing, constantly reinventing its products and services to stay ahead in the market.

Date Posted: 2026-05-05