Mary Jane's pet gallery (Little Margie Productions ©2023)
After 'experimenting' for a year, it was a big kick to finally see what Eki and I had hashed out. "Mary Jane's pet gallery" is 45', and it has a point of view and a punch line. Eki was a brick, giving me a first-level tutorial in the process.
The year went by - ChatGPT and Midjourney changed our blog. The illustrations are on a whole other level. But it was a jolt when ChatGPT wrote posts. When the New Yorker and the Economist used the same AI apps as Eki, I was over the moon.
All the way through the process I had no idea what the video would look like. The photos we took of the animals and the voiceover were done the old-fashioned way - on a smartphone.
AI's infant stage is amazing. But when it gets to be a teenager and grows up, will we love it, or will all hell break loose? In the meantime it's a runaway boffo hit: ChatGPT has over 100 million users. It was fun to jump in at the beginning with our AI video. Check it out.
Sources: Eki, Annie Lavigne (photos), Augustin du St Remy, and Maggy (voiceover)
Next week: GenZ (1997-2012) LUDDITES switch OFF
Note: First of all, I'll just paste the YouTube blurb here...
Mary Jane's pet gallery is an experiment in AI-assisted video production. All of the source material for this video was created using artificial intelligence. The gallery, all the paintings, and the starring pets were created using text-to-image AIs, namely Midjourney, DALL·E 2, and Stable Diffusion. The animation is a combination of traditional techniques and warping using AI-generated depth maps. The facial animation for the pig was created using a thin-plate-spline motion AI model, transferring the motion from a video of Maggy's live performance, recorded with a cell phone. The audio for the dialogue was also recorded simultaneously by phone, then processed with Adobe Podcast AI. The music is an AI creation, made with AIVA.
Directed by Maggy Fellman & Eki Halkka
Edited by Eki Halkka
...and that pretty much tells most of the story, in addition to the year's worth of posts here explaining and documenting the process, of course.
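For the technically curious, the "warping using AI-generated depth maps" part boils down to something like the sketch below: estimate a depth map from a still image, then shift pixels in proportion to their depth to fake a little camera move. This is a minimal sketch only, not our exact pipeline - it assumes MiDaS via torch.hub for the depth estimate, and the file name and shift amount are made up for the example.

```python
# Minimal sketch: depth-map-based 2.5D parallax warp of a still image.
# Assumes MiDaS (via torch.hub) for depth; "painting.png" and max_shift
# are hypothetical values for illustration.
import cv2
import numpy as np
import torch

# Load a small MiDaS depth model and its matching input transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

img = cv2.imread("painting.png")              # hypothetical input frame
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(rgb))
    # Resize the prediction back to the original image resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=rgb.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()
depth = prediction.cpu().numpy()

# Normalize to 0..1 (MiDaS outputs relative inverse depth: bigger = nearer).
depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)

h, w = depth.shape
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                     np.arange(h, dtype=np.float32))

# Shift near pixels more than far ones -> fake sideways camera slide.
max_shift = 12.0                              # pixels, tune to taste
map_x = xs - depth.astype(np.float32) * max_shift
warped = cv2.remap(img, map_x, ys, interpolation=cv2.INTER_LINEAR,
                   borderMode=cv2.BORDER_REFLECT)
cv2.imwrite("warped.png", warped)
```

In the actual edit, moves like this get keyframed and layered with the traditional animation techniques, but the core trick is just that per-pixel displacement.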
This most definitely was a case where the journey was more important than the goal - though gotta say, I'm really happy with how our little experiment turned out in the end. I would have dived into AI stuff regardless, but working on this project was a much better learning experience than just randomly dabbling around would have been - there was a method to the madness, an actual goal towards which to push the learning curve - all in a safe setting where a failure would not have been the end of the world (unlike in, err, so-called "real paid projects", where every work-hour needs to add real value to the client, or they'll find someone else).
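And since "thin-plate-spline" sounds scarier than it is: the neural model predicts how a set of facial keypoints moves from frame to frame of the driving video, and a thin-plate-spline warp then bends the whole image to follow those points. Here's a tiny self-contained illustration of just the warp half, using SciPy's thin-plate-spline interpolator - the image name and the hand-picked control points are invented for the example; the real model learns the keypoints from the driving video instead.

```python
# Tiny illustration of a classical thin-plate-spline warp: move a few
# control points and bend the whole image smoothly with them. Not the
# neural model itself - just the warp it is built on. "pig.png" and
# the control points below are hypothetical.
import cv2
import numpy as np
from scipy.interpolate import RBFInterpolator

img = cv2.imread("pig.png")                   # hypothetical source image
h, w = img.shape[:2]

# Where features sit in the source image (x, y)...
src_pts = np.array([[120, 200], [220, 200], [170, 300],
                    [60, 80], [280, 80]], dtype=np.float64)
# ...and where the driving frame says they should move to
# (here: eyes up, snout down, two anchor points held still).
dst_pts = src_pts + np.array([[0, -15], [0, -15], [0, 25],
                              [0, 0], [0, 0]], dtype=np.float64)

# Backward map: for every output pixel, where do we sample the source?
# So we fit the TPS from destination points back to source points.
tps = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")

ys, xs = np.mgrid[0:h, 0:w]
grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
sample = tps(grid).astype(np.float32)         # (h*w, 2) source coords

map_x = sample[:, 0].reshape(h, w)
map_y = sample[:, 1].reshape(h, w)
warped = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                   borderMode=cv2.BORDER_REFLECT)
cv2.imwrite("pig_warped.png", warped)
```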
Overall... really glad we did it. Kudos to Maggy for being Naggy, and pushing me forward.
CU
--
Eki