Job Description
Description

• Own the full lifecycle of cutting-edge computer-vision algorithms that power VIDAS, Compound Eye’s flagship 3D perception stack. You will design, prototype, optimize, and ship code that turns ordinary automotive-grade camera feeds into dense, semantic 3D maps—at 30 fps on an embedded GPU—giving vehicles and robots the situational awareness they need without LiDAR or radar.

• Architect and implement real-time perception modules in modern C++17/20. You will write lock-free data pipelines, leverage SIMD intrinsics, and profile CUDA kernels to squeeze every millisecond out of ARM and NVIDIA Jetson-class hardware. Your code will run in passenger vehicles, agricultural robots, surgical navigation systems, and defense platforms, so correctness, safety, and determinism are non-negotiable.

• Push the state of the art in multi-view geometry, visual-inertial odometry, and neural radiance fields. You will read SIGGRAPH, CVPR, and ICRA papers on Monday, experiment in Jupyter on Tuesday, and land an optimized, regression-tested implementation in our main branch by Friday. Expect to publish when appropriate and file patents when you break new ground.

• Build robust evaluation frameworks that stress-test algorithms across sun glare, rain, motion blur, and sensor failure. You will mine petabytes of real-world driving and robotics data to identify corner cases, design synthetic augmentation strategies, and automate metrics that quantify drift, latency, and semantic accuracy.

• Collaborate daily with a cross-functional team of 14 engineers spread across three U.S. time zones. You will review PRs in GitHub, debate coordinate-frame conventions in Slack huddles, and occasionally jump on a customer call to explain why a monocular depth network outperformed stereo in a dusty cornfield.

• Mentor and recruit the next generation of vision engineers. You will conduct technical interviews, run onboarding bootcamps, and pair-program with junior teammates to level up their C++ and linear-algebra intuition. As we scale, you will have the option to lead a small strike team focused on SLAM, calibration, or neural rendering.

• Contribute to safety-critical certification efforts (ISO 26262, DO-178C). You will document algorithms in LaTeX, trace requirements in Jira, and participate in failure-mode reviews so our perception stack meets automotive and aerospace standards.

• Influence the product roadmap by translating customer pain points into technical deliverables. When an ag-tech partner needs centimeter-level elevation maps for autonomous tractors, you will scope the effort, propose sensor configurations, and deliver a working demo in six weeks.

• Maintain a culture of transparency and scientific rigor. You will present weekly “tech deep dives,” open-source non-core tooling, and uphold a blameless post-mortem process where every bug is a learning opportunity.
Requirements

• Mastery of C++11/14/17 in production environments—templates, move semantics, and RAII should feel like second nature

• Solid grasp of linear algebra, multivariate calculus, and probability as applied to 3D geometry and filtering

• Demonstrated experience with at least one of: real-time SLAM, structure-from-motion, visual-inertial odometry, or neural 3D reconstruction

• Nice-to-have: CUDA or OpenCL kernel development, exposure to safety-critical codebases, and a track record of peer-reviewed publications or patents

Benefits

• Remote-first culture with a $2,000 home-office stipend, top-tier laptop, and optional co-working membership

• Comprehensive medical, dental, and vision coverage for you and dependents—100% of premiums paid by Compound Eye

• 401(k) with 4% company match, immediate vesting, and access to ESG and crypto index funds

• Discretionary PTO (minimum 20 days) plus company-wide shutdowns the last week of December

• Annual learning budget of $3,500 for conferences, courses, and books
Apply to this job