Inference Systems Engineer (LLM Serving Runtime + Performance) / Global Environment / Hybrid Work
Kanagawa
Full-time
Remote
Famous Start-up
Own Products/Services
Global Business
Many Foreign Employees
Industry
Data Center Services
IT Skills
Python, C++
Salary
8 million to 14 million yen
Job Description
About the Product
The product is a next-generation AI compute platform built in Japan, optimized for high-density accelerated systems and modern AI workloads. The company operates a heterogeneous environment spanning multiple accelerator architectures and evolving server platforms. Its goal is to deliver reliable, predictable performance for production model serving and other AI workloads: at scale, under real traffic, with strong operational discipline.
Performance in model serving is not a single knob. It is an end-to-end systems problem that spans request scheduling, batching behavior, memory/KV-cache management, kernel/runtime efficiency, and production observability. The company treats serving infrastructure as a first-class system: engineered for throughput and tail latency, built for correctness, and designed to be operable.
Role overview
As an Inference Systems Engineer, you will own the serving runtime that powers production LLM inference. This is a deeply technical role focused on system performance and stability: optimizing request lifecycle behavior, streaming correctness, batching/scheduling strategy, cache and memory behavior, and runtime execution efficiency. You will ship changes that improve time-to-first-token (TTFT), p95/p99 latency, throughput, and cost efficiency, while preserving correctness and reliability under multi-tenant load.
You will collaborate closely with platform/infrastructure operations, networking, and API/control-plane teams to ensure the serving system behaves predictably in production and can be debugged quickly when incidents occur. This role is for engineers who can reason about the entire inference pipeline, validate improvements with rigorous measurement, and operate with production-grade discipline.
Responsibilities
●Own the end-to-end serving runtime behavior: request lifecycle, streaming semantics, cancellation, retry interactions, timeouts, and consistent failure modes.
●Design and implement batching and scheduling strategy: dynamic batching, admission control, fairness under mixed tenants, priority lanes, and backpressure mechanisms to prevent cascading failures.
●Optimize performance at the systems level: reduce time-to-first-token, improve tail latency stability, increase tokens/sec throughput, and improve accelerator utilization under realistic workloads.
●Improve memory behavior and cache efficiency: KV-cache policies, fragmentation control, eviction strategies, and safeguards against OOM cliffs and performance thrash.
●Drive runtime execution optimizations: operator-level improvements, quantization integration, compilation/tuning paths where appropriate, and parameterization that produces stable performance across deployments.
●Establish a performance measurement discipline: reproducible benchmarks, realistic traffic traces, profiling workflows, regression detection gates, and dashboards tied to production outcomes.
●Build production readiness into the system: feature-flagged rollouts, canarying, safe configuration changes, and incident playbooks that reduce MTTR.
●Partner with networking and infrastructure operations to align deployment topology, failure domains, and capacity constraints to performance and reliability goals.
●Collaborate with product and API teams to ensure the serving layer’s guarantees are reflected accurately in external interfaces and customer expectations.
Required Skills
●5+ years building high-performance systems (model serving, GPU systems, performance engineering, or low-latency distributed systems).
●Strong understanding of LLM inference tradeoffs: batching vs. latency, prefill vs. decode dynamics, cache behavior, memory pressure, and common causes of tail latency.
●Comfort working across Python/C++ stacks with production profiling and debugging tools.
●Track record of shipping performance improvements that hold up under production variance and operational constraints.
●Strong engineering hygiene: tests, instrumentation, documentation, and careful rollout discipline.
Required Language Skills
Japanese Level
Business Level
English Level
Business Level
Our bilingual career consultants will fully support your job transition, including:
●Preparing Japanese-style resumes.
●Resolving worries related to visa sponsorship.
●Handling salary and other negotiations.
●Managing interview scheduling and related paperwork.
●Tips for passing interviews.