Leaving DeepPlants — pursuing R&D in embodied AI
As of March 2026, I have left DeepPlants after co-founding the company and leading R&D there since September 2021. I am grateful for the experience of building production-grade agentic AI systems for micro-farming and agri-tech, and for the team and partners who made it possible.
My goal is to continue doing research and development in embodied AI. I am now in a deliberate transition toward roles where I can do exactly that: building high-impact, production-grade intelligent systems.
Current focus:
- Deep technical consolidation in multimodal foundation models — including Vision-Language and Vision-Language-Action architectures — large-scale Transformer systems, and probabilistic modeling for sequential decision processes.
- Structured analysis of research trends in embodied AI and generative modeling, combined with hands-on prototyping to assess architectural trade-offs, scalability constraints, and deployment feasibility.
Key focus areas:
- System-level design of multimodal AI pipelines.
- Evaluation methodologies for large-scale models.
- Integration of learning-based components into decision-making systems.
- Bridging research prototypes with production-oriented constraints.
Objective: Continue R&D in embodied AI, contributing at a strategic level to teams building next-generation AI platforms and intelligent embodied systems.
If you are working on such problems and are open to collaboration or have roles in this space, feel free to reach out.