Introducing Lyn
The video-based super-agential multimodal ecosystem.
Lyn is the first open-source foundational video model and super-agential multimodal ecosystem, designed to deliver an open dream machine to the world. Lyn's flagship open-source model, Openlyn, is reshaping the capabilities of video AI in output duration, interactivity, generation latency, and hyper-realistic visual quality. Lyn's video creation engine and platform, Everworld, is revolutionizing how people engage with photorealistic virtual representations of themselves through personalized, interactive video agents capable of performing a wide range of tasks. Together, Openlyn and Everworld will profoundly alter what humans can achieve online, taking over tasks and activities that individuals either prefer not to do or are unable to do on their own.
Lyn was founded by a team of leading experts in generative AI: professors from the University of Toronto (UofT), Oxford University, Cornell University, Stanford University, the University of Central Florida (UCF), the Hong Kong University of Science and Technology (HKUST), Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), and Peking University (PKU); former leaders from Meta, DeepMind, Microsoft, Google, Tencent, and Amazon; and current PhD students in generative AI. The team includes prominent AI researchers who played pivotal roles in some of the industry's most significant innovations, including Google's foundational video model, VideoPoet; Meta's foundational video model, Make-A-Video; Tencent's foundational video model; Meta Movie Gen's most-cited benchmark model, Seeing and Hearing; the industry's top open-source video model, Open-Sora-Plan; and OpenAI's first real-time end-to-end humanoid agent video model, Body of Her.
In the following pages, we present our vision for video agents called Lyns: photorealistic representations of human beings with multimodal agential capabilities, running on a decentralized protocol powered by a vast network of Agent APIs (AAPIs). We expect Lyns to replace the majority of human activity online in the coming years. We begin with a detailed description of the foundational video model underpinning our network and our research in autoregressive modeling, tokenization, MLLM hallucination reduction, and data pipelining on the video AI side, paired with our technological advancements in hyper-contextual data modules, personalized data feed partitioning (DataLink), hybrid in-cloud and on-device model deployment, and Agent API protocol routing (CommandFlow). We then discuss our plans for mass adoption and the future of our decentralized agential platform, along with the economy that will drive the Lyn ecosystem.
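To make the routing idea concrete before the detailed chapters, here is a minimal, purely illustrative sketch of how a CommandFlow-style router might dispatch a task to a registered AAPI and choose between on-device and in-cloud execution. The class names, fields, and latency heuristic below are assumptions for illustration only; the actual protocol is specified later in this book.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical registry entry for an Agent API (AAPI). All field names
# are illustrative assumptions, not the real CommandFlow schema.
@dataclass
class AgentAPI:
    name: str                       # capability identifier, e.g. "video.lipsync"
    on_device: bool                 # whether a local model can serve this capability
    latency_budget_ms: int          # assumed per-request latency target
    handler: Callable[[dict], dict] # the function that fulfills the request

class CommandFlowRouter:
    """Toy sketch of CommandFlow-style routing: look up an AAPI for a
    capability, then pick on-device vs. in-cloud execution."""

    def __init__(self) -> None:
        self._registry: dict[str, AgentAPI] = {}

    def register(self, api: AgentAPI) -> None:
        self._registry[api.name] = api

    def route(self, capability: str, payload: dict, *, interactive: bool) -> dict:
        api = self._registry.get(capability)
        if api is None:
            raise KeyError(f"no AAPI registered for capability {capability!r}")
        # Hybrid-deployment heuristic (assumption): interactive,
        # latency-sensitive requests prefer on-device execution when
        # available; everything else falls back to the cloud endpoint.
        target = "device" if (interactive and api.on_device) else "cloud"
        return {"target": target, "result": api.handler(payload)}

if __name__ == "__main__":
    router = CommandFlowRouter()
    router.register(AgentAPI(
        name="video.lipsync",
        on_device=True,
        latency_budget_ms=150,
        handler=lambda p: {"status": "ok", "frames": p.get("frames", 0)},
    ))
    # An interactive request is routed on-device under the assumed heuristic.
    print(router.route("video.lipsync", {"frames": 24}, interactive=True))
```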