Luma AI is a startup building multimodal AI systems that can see, understand, and interact with the world.
Their mission is to create an intelligent creative partner that can help expand human imagination and capabilities.
They are developing multimodal foundation models that process both vision and language, a critical step beyond today's text-only language models.
Their goal is to create AI systems that can imagine, show, and explain things rather than just tell, enabling new kinds of intelligent creative assistants.
Luma has released an iPhone app that uses AI to capture 3D scenes, objects, and products. The app produces lifelike 3D captures without any special equipment.
They have also launched a video-to-3D API that generates interactive 3D models from video walkthroughs for $1 per capture, making 3D modeling far faster and cheaper than traditional workflows.
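To make the shape of such a workflow concrete, here is a minimal TypeScript sketch of uploading a walkthrough video and polling for the finished model. The base URL, routes, field names, and response shape are illustrative assumptions, not Luma's documented API; the official docs define the real contract.

```typescript
// Hypothetical sketch of a video-to-3D capture workflow (Node 18+).
// API_BASE, the routes, and the response fields are assumptions for
// illustration only; they are not Luma's documented endpoints.
import { readFile } from "node:fs/promises";

const API_BASE = "https://api.example.com/v1"; // placeholder base URL
const API_KEY = process.env.LUMA_API_KEY ?? "";

// Upload a video walkthrough and return the id of the new capture job.
async function submitCapture(videoPath: string): Promise<string> {
  const bytes = await readFile(videoPath);
  const form = new FormData();
  form.append("video", new Blob([bytes]), "walkthrough.mp4");

  const res = await fetch(`${API_BASE}/captures`, {
    method: "POST",
    headers: { Authorization: `Bearer ${API_KEY}` },
    body: form,
  });
  if (!res.ok) throw new Error(`upload failed: ${res.status}`);
  const { id } = (await res.json()) as { id: string };
  return id;
}

// Poll the job until it reports a downloadable interactive 3D model.
async function waitForModel(captureId: string): Promise<string> {
  for (;;) {
    const res = await fetch(`${API_BASE}/captures/${captureId}`, {
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    const job = (await res.json()) as { status: string; modelUrl?: string };
    if (job.status === "complete" && job.modelUrl) return job.modelUrl;
    if (job.status === "failed") throw new Error("capture processing failed");
    await new Promise((resolve) => setTimeout(resolve, 30_000)); // retry every 30 s
  }
}
```

The upload-then-poll pattern is typical for long-running 3D reconstruction jobs, since processing a full walkthrough can take minutes rather than seconds.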
Luma's 3D scenes and models can be embedded anywhere on the web and shared across platforms. They are designed to be efficient, shareable, and commercially usable.
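On the web side, embedding a capture typically comes down to dropping its public share link into an iframe. The snippet below is an illustrative browser-side helper; the share URL is a placeholder supplied by whoever owns the capture, not a real link.

```typescript
// Illustrative browser-side embed of a shared 3D capture in an <iframe>.
// The share URL is a placeholder passed in by the caller.
function embedCapture(container: HTMLElement, shareUrl: string): void {
  const frame = document.createElement("iframe");
  frame.src = shareUrl;        // the capture's public share/embed link
  frame.width = "100%";
  frame.height = "480";
  frame.style.border = "0";
  frame.allow = "fullscreen";  // let the viewer expand to fullscreen
  container.appendChild(frame);
}

// Usage: embedCapture(document.querySelector("#viewer")!, "<capture share URL>");
```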
In June 2024, Luma released "Dream Machine", an AI model that can generate high-quality, consistent videos from text and images. It shows impressive temporal consistency and understanding of motion dynamics.
In summary, Luma AI is building multimodal AI with applications spanning 3D capture, 3D modeling, and video generation, all in service of intelligent creative tools that expand human imagination and capabilities.