LABS

Sharing our experiments…

ANIMATEDIFF CHARACTERS

August 27, 2024

ANIMATEDIFF CHARACTERS | SAM 2 WITH IPADAPTERS

For this LABs experiment we explored Meta SAM 2’s ability to keep the subject masked as they pass behind occluders. We then applied the mask as the attention mask for ControlNet and the IPAdapter, and ran it through AnimateDiff in ComfyUI.

This pipeline enables AI-assisted storytelling that transforms the subject into generative materials (e.g. noodles, froggy man, clouds, bubbles) that can interact with real-world environments.
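
For those who want to try the masking stage themselves, here is a minimal sketch of SAM 2’s video mask propagation, assuming the open-source sam2 package and a downloaded checkpoint; the config, checkpoint path and click coordinates below are placeholders:

```python
# Minimal sketch of SAM 2 video masking; config/checkpoint paths and the
# click coordinates are placeholders.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

predictor = build_sam2_video_predictor(
    "sam2_hiera_l.yaml",                 # model config (assumed filename)
    "checkpoints/sam2_hiera_large.pt",   # checkpoint (assumed path)
)

with torch.inference_mode():
    # init_state expects a directory of extracted video frames.
    state = predictor.init_state(video_path="input_frames")
    # Seed the subject with a single positive click on the first frame.
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=1,
        points=np.array([[480, 270]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )
    # Propagate the mask through the clip; it survives occlusions.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        mask = (mask_logits[0] > 0).cpu().numpy()
        # ...save `mask`; it later becomes the ControlNet / IPAdapter
        # attention mask inside ComfyUI.
```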

LIQUID LOGO TRANSITIONS

August 20, 2024

LIQUID LOGO TRANSITIONS | CONTROLNET WITH DREAM MACHINE

Inspired by the multiverse trend in branding, we used ControlNet and Dream Machine’s Keyframe feature to create a series of experimental ident transitions.

We used our T&DA logo as the depth image to ensure it creatively flows across multiple themes. This approach could become an invaluable tool for brands, showcasing how far imagination and #GenAI can stretch to create unique and impactful visual identities.
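
As a rough illustration of the depth-conditioning stage, here is a hedged diffusers sketch that renders themed keyframes from a logo depth image; the model IDs are common public checkpoints and the prompts and filenames are placeholders, not necessarily what we used. The resulting keyframes are what would then be handed to Dream Machine’s Keyframe feature for the in-between motion.

```python
# Hedged sketch of depth-conditioned keyframes; model IDs are public
# checkpoints, and prompts/filenames are placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet, torch_dtype=torch.float16,
).to("cuda")

logo_depth = load_image("tda_logo_depth.png")  # logo rendered as a depth map
for i, theme in enumerate(["molten chrome", "bioluminescent coral", "paper origami"]):
    keyframe = pipe(
        f"{theme} liquid logo, studio lighting",
        image=logo_depth,               # the logo's depth anchors every theme
        num_inference_steps=30,
    ).images[0]
    keyframe.save(f"keyframe_{i:02d}.png")
```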

SXSW 2024 AI PHOTOBOOTH

March 28, 2024

SXSW 2024 AI PHOTOBOOTH | AI POWERED PHOTOBOOTH WITH STABLE DIFFUSION

This year at SXSW in Austin, we showcased our AI Photobooth.

Our AI Photobooth transforms people into imaginative styles that dynamically change across the event. It uses a Stable Diffusion backend with LCMs, ControlNets and IPAdapters to capture and transform attendees into exciting multiverses beyond our own.
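
As a sketch of how such a few-step backend can fit together, here is a hedged diffusers example combining an LCM-LoRA, a depth ControlNet and an IPAdapter; all model IDs are public checkpoints and the input images are placeholders, so treat it as an outline rather than our production pipeline. With the LCM-LoRA attached, four inference steps at guidance 1.0 keep the queue moving at event pace.

```python
# Outline of a few-step photobooth backend: LCM-LoRA for speed, a depth
# ControlNet for pose, an IPAdapter for likeness. Model IDs are public
# checkpoints; the capture images are placeholders.
import torch
from diffusers import ControlNetModel, LCMScheduler, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet, torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")      # few-step LCM
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")           # likeness
pipe.set_ip_adapter_scale(0.6)

attendee = load_image("attendee.jpg")      # webcam capture (placeholder)
depth = load_image("attendee_depth.png")   # depth map of that capture
styled = pipe(
    "portrait in a neon cyberpunk multiverse",
    image=depth, ip_adapter_image=attendee,
    num_inference_steps=4, guidance_scale=1.0,   # LCM: few steps, low CFG
).images[0]
styled.save("booth_output.png")
```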

AI SOUNDSCAPES

December 20, 2023

AI SOUNDSCAPES | GENERATIVE AUDIO WITH GENERATIVE VISUALISERS

In this T&DA exploration, we have harnessed the power of Stable Audio to turn text prompts into three unique music tracks that drive audio-reactive AI visuals.

We isolated stems from the Stable Audio tracks using Moises.ai’s audio separation capability, then leveraged the MIDI conversion tool from Spotify’s Audio Intelligence Lab to transform these stems into MIDI tracks. This gave us complete creative control to compose three original soundscapes inspired by the ocean, viruses, and neon space worlds.
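
Assuming the conversion tool was Spotify’s open-source Basic Pitch (pip install basic-pitch), the stem-to-MIDI step can be sketched in a few lines; the stem file names are placeholders:

```python
# Stem-to-MIDI with Basic Pitch (pip install basic-pitch); stem names
# are placeholders.
from basic_pitch.inference import predict

for stem in ["ocean_pads.wav", "virus_bass.wav", "neon_lead.wav"]:
    model_output, midi_data, note_events = predict(stem)
    midi_data.write(stem.replace(".wav", ".mid"))  # midi_data is a PrettyMIDI object
    print(f"{stem}: {len(note_events)} notes detected")
```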

We designed 3D audio visualisers in our custom Houdini AI pipeline, which generates an audio-reactive depth pass and uses AnimateDiff and ControlNet for the AI reskinning.
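
The audio-reactive signal itself can be approximated with a simple per-frame loudness envelope. Here is a hypothetical Python sketch using librosa; the file name and frame rate are placeholders, and in our pipeline the equivalent signal drives the depth pass inside Houdini:

```python
# Per-video-frame loudness envelope; the track name and frame rate are
# placeholders. One scalar per frame, normalised to 0..1 for easy rigging.
import librosa
import numpy as np

y, sr = librosa.load("ocean_soundscape.wav", sr=None, mono=True)
fps = 24
spf = sr // fps                       # audio samples per video frame
n_frames = len(y) // spf

rms = np.array([
    np.sqrt(np.mean(y[i * spf:(i + 1) * spf] ** 2)) for i in range(n_frames)
])
envelope = (rms - rms.min()) / (rms.max() - rms.min() + 1e-8)
np.save("audio_envelope.npy", envelope)  # read per frame by the depth pass
```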

Listen in!

AI ALBUMS

November 27, 2023

AI ALBUMS | ANIMATING ALBUM COVERS WITH STABLE VIDEO DIFFUSION

In this LABs experiment we have used the recently released Stable Video Diffusion to bring album covers to life. Stable Video Diffusion is a foundation model for generative AI video that produces clips from text prompts or still images.
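
For reference, animating a still image this way takes only a few lines with the diffusers library; this is a minimal sketch with a placeholder cover image and motion settings, not our exact configuration:

```python
# Minimal Stable Video Diffusion img2vid sketch via diffusers; the album
# art path and motion settings are placeholders.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

cover = load_image("album_cover.png").resize((1024, 576))
frames = pipe(cover, decode_chunk_size=8, motion_bucket_id=127).frames[0]
export_to_video(frames, "album_cover_animated.mp4", fps=7)
```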

AI TRAVELS WITH ANIMATEDIFF

October 30, 2023

AI TRAVELS WITH ANIMATEDIFF

In this LABs experiment we have explored the temporal capabilities of AnimateDiff in a real-world environment.

We used ControlNet’s depth preprocessor to approximate a depth map of the input footage, which we then used to guide AnimateDiff alongside various prompts. Unlike our earlier img2img experiments, where each frame was generated from scratch, AnimateDiff gives each frame here the context of its neighbours within 16-frame buffer chunks.
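
The depth pass can be sketched with the MiDaS annotator from the controlnet_aux package; the frame directories here are placeholders, and the saved maps would then condition the depth ControlNet:

```python
# Depth-map pass over input footage with the MiDaS annotator from
# controlnet_aux; frame directories are placeholders.
from pathlib import Path

from controlnet_aux import MidasDetector
from PIL import Image

midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
out_dir = Path("depth_frames")
out_dir.mkdir(exist_ok=True)

for frame_path in sorted(Path("input_frames").glob("*.png")):
    depth = midas(Image.open(frame_path))   # estimated depth for this frame
    depth.save(out_dir / frame_path.name)   # later conditions the depth ControlNet
```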

AI MOTION LOGOS

October 11, 2023

AI MOTION LOGOS | HYPNOTIC MOTION LOGOS WITH ANIMATEDIFF

Here is our latest LABs exploration into generating AI videos with AnimateDiff and ControlNet.

AnimateDiff has the unique ability to retain temporal cohesion between frames through motion LoRAs. When combined with the ControlNet QRCode Monster model, we are able to direct the optical flow of motion to follow the path of our logo and even make it a seamless loop.
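
Here is a hedged diffusers sketch of the motion-LoRA side of this setup; the checkpoints are public ones that may differ from what we used, and the QRCode Monster ControlNet pass over the logo is omitted for brevity:

```python
# AnimateDiff with a motion LoRA in diffusers; checkpoints are public ones
# that may differ from ours, and the ControlNet pass is omitted.
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter, torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(
    "guoyww/animatediff-motion-lora-rolling-anticlockwise",  # steers the motion
    adapter_name="roll",
)

frames = pipe(
    "hypnotic liquid metal logo, seamless loop",
    num_frames=16, num_inference_steps=25,
).frames[0]
export_to_gif(frames, "motion_logo.gif")
```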

AI LIVE SCANNER

August 8, 2023

AI LIVE SCANNER | REALTIME AI POWERED MIRROR WITH STABLE DIFFUSION

We present our latest experimental installation, showcased at Limehouse Creative Circle: a real-time AI-powered mirror that turns partygoers into a series of styles such as Pixar, Watercolour, Studio Ghibli, Barbie, Tom Cruise and more!

Read more about it on The Stable

 

POCKET DIMENSIONS

August 8, 2023

STEP INTO ANOTHER DIMENSION

Pocket Dimensions is an MR/VR experience where players are able to explore various unique dimensions that can adapt to any space. It started out as a way to create a caving system, using Procedural Mesh Generation, based on user-generated three-dimensional splines.

Players draw rifts through their room, then open up these rifts and step into a procedurally generated cave system guided by their defined pathway. Within this Pocket Dimension, they are able to collect energy by mining crystals.

We intend to add a feature where, once the experience is completed, users contribute their mined crystals into a communal pot, like a crowdsourced hive honey pot, and the community can build towards something together. Currently, a user can enter dimensions they have explored into a virtual passport system and, based on the energy they have collected, use that energy to explore further into the worlds that live within all dark matter.
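
For the technically curious, here is a hypothetical sketch of the core idea: sample a smooth curve through the points of a drawn rift, then sweep ring cross-sections along it to form the cave tube. The control points, radius and resolution below are illustrative placeholders, not our production values:

```python
# Sample a smooth path through drawn rift points (Catmull-Rom), then sweep
# ring cross-sections along it; triangulating adjacent rings yields the tube.
import numpy as np

def catmull_rom(points, samples_per_seg=8):
    """Interpolate a smooth 3D path through the user's control points."""
    pts = np.asarray(points, dtype=float)
    path = []
    for i in range(len(pts) - 1):
        p0, p1 = pts[max(i - 1, 0)], pts[i]
        p2, p3 = pts[i + 1], pts[min(i + 2, len(pts) - 1)]
        for t in np.linspace(0.0, 1.0, samples_per_seg, endpoint=False):
            path.append(0.5 * ((2 * p1) + (-p0 + p2) * t
                               + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                               + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3))
    return np.array(path)

def cave_rings(path, radius=1.5, sides=12):
    """One vertex ring per path sample (assumes the path is never vertical)."""
    rings = []
    for i in range(len(path) - 1):
        centre = path[i]
        fwd = path[i + 1] - centre
        fwd /= np.linalg.norm(fwd)
        side = np.cross(fwd, [0.0, 1.0, 0.0])
        side /= np.linalg.norm(side)
        up = np.cross(side, fwd)
        angles = np.linspace(0, 2 * np.pi, sides, endpoint=False)
        rings.append([centre + radius * (np.cos(a) * side + np.sin(a) * up)
                      for a in angles])
    return np.array(rings)

drawn = [[0, 0, 0], [1, 0.5, 2], [2, 0, 4], [1, -0.5, 6]]  # placeholder rift
print(cave_rings(catmull_rom(drawn)).shape)                # (n_rings, sides, 3)
```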

 

The experience has been developed to be HMD-agnostic. Try it now for Meta Quest Pro:

https://drive.google.com/file/d/16XOtGtsDryu5ksNRF0dhMBnoPSZFgj5_/view?usp=share_link

SKETCH DIFFUSION

August 8, 2023

SKETCH DIFFUSION | LIVE AR DRAWING WITH STABLE DIFFUSION AND META QUEST PRO

Here is our latest AI exploration with Meta’s Quest Pro, where we managed to get Gravity Sketch running live through Stable Diffusion img2img. As you draw, your sketch turns into the prompt you specify (with a 3-4 second delay). The new details that Stable Diffusion generates in the output then shape and inspire the way you continue to develop your creation.

This experience explores a whole new paradigm of human-computer interfacing by reinforcing our symbiotic relationship with neural models. It was made possible with Python and our technical artist’s modded version of InvokeAI, “DiffusionCraft AI”, which supports realtime prompt hotswapping.
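
Conceptually, the live loop looks something like the following hypothetical sketch: grab the latest sketch frame, run a fast img2img pass, and stream the result back. The file names, model and strength value are placeholders rather than DiffusionCraft AI’s actual internals:

```python
# Hypothetical live loop: read the latest sketch frame, restyle it with a
# fast img2img pass, write it back. File names, model and strength are
# placeholders, not DiffusionCraft AI's internals.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "ancient stone temple, overgrown vines"  # hot-swapped at runtime
while True:
    sketch = Image.open("latest_sketch.png").convert("RGB")  # from the headset
    restyled = pipe(
        prompt, image=sketch,
        strength=0.55,             # keep the drawn structure, restyle the rest
        num_inference_steps=20,
    ).images[0]
    restyled.save("diffused_view.png")  # streamed back as the headset overlay
```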

#ar #wearables #metaquest #oculus #arvr #meta #experience

DAYDREAMS

June 28, 2022

DAYDREAMS

A combination of Wordle, Hangman and Midjourney, Daydreams is the first daily game to use AI-synthesised images. In essence, it’s a web-based game with AI image-based clues.

Every new day presents a new ‘daydream’ to bring into reality. All players solve the exact same daydreams each day, making it the perfect way to start your day, readying you for the day’s watercooler conversation. An addictive daily game, Daydreams is suitable for anyone and everyone to play.

Daydreams leverages Midjourney – a powerful CLIP-guided diffusion model and service where people enter prompts into the Midjourney Discord bot and images are generated, such as ‘disco astronaut painted by Salvador Dali’ or ‘artistic glass shark sculpture with studio lighting, 3D render’. And it really is only limited by people’s imagination.

Each daydream is a Midjourney prompt that players must guess from the Midjourney-generated images – some days are easy, others harder.

Play it here: https://daydreams.ai/

WE ARE THE WAR

June 8, 2022

WE ARE THE WAR | CLIP-GUIDED STORYTELLING & SPEECH SYNTHESIS

At T&DA we have explored leveraging the latest technologies to create ‘We Are The War’, an animated graphic novella poem, where both the images and narration were generated through visual and audio synthesis.

Created using prompts inspired by children’s book illustrations and the lines of our poem, the images were generated with Midjourney. The narrator was generated through speech synthesis software that needed only a 15-minute sample of speech. A bespoke filter was designed for motion capture and used to apply facial animation with EbSynth.

To add parallax and depth to the scenes we used neural Z depth extraction.

Using these tools we tell the story of children’s metaversal shenanigans as the realities of recent years bleed into their locked-down lives. We can think of this technology almost as a 10-year-old expressing itself, which felt like the perfect means to convey the sentiments of the poem.

#midjourney #clip #ai #descript #synthesis

DISCORD DUNGEONS

June 8, 2022

DISCORD DUNGEONS | ASCII MAZE GAME

Introducing a new original idea, a first person game that can be played entirely inside Discord!

Created by T&DA developer Lachlan Christophers, it lets users explore an underground maze privately or collaborate with friends.

Inspired by early text adventure games, Discord Dungeons uses embedded messages in a normal Discord chat channel and implements buttons for navigational controls, but only by completing the maze can users discover what awaits them.

Head over to the T&DA Discord server to try it out for yourself!

*Once in the server, type either /maze public or /maze private in one of the available chats to start a brand-new maze game.

ELEPHANTOPIA

March 29, 2022

ELEPHANTOPIA | NFT & BLOCKCHAIN GAMING PROJECT

Elephantopia is the first step into the web3 and NFT world for T&DA. We’re making a game world with the idea of eventually having dozens of different game modes and activities all within the same world.

An initial set of Trunket profile-image NFTs helps establish the concept and garner interest. These NFTs will become the access key to the 3D game world we’re working on. We’ve currently started on a racing segment, but over time there will be many different things to try as more gets built…

SOUL TRADER

July 8, 2021

SOUL TRADER | VIRTUAL TRADING CARD AR FILTER

For the latest #FacebookHackathon, we have designed a filter that explores a future in which our soul can be captured, shared, and commodified into blockchain trading cards. This is a future where our identity can be exchanged for virtual coins with just the swipe of a finger.

T&DA presents Soul Trader.

Try it on Instagram or Facebook

#arfilters #sparkar #facebook #hackathon

ROLLERPUTT

May 20, 2021

ROLLERPUTT | REALTIME DYNAMICS MEETS CROWD-SOURCED DECISION MAKING

Let’s get this ball rolling!

Rollerputt is a collaborative live gameplay event hosted on the #Facebook platform and designed in #UnrealEngine. This idea has potential for very high engagement through social channels.

Players roll a virtual marble through a hand-crafted obstacle course, collecting coins and knocking down walls to reach the final flag, but here’s the catch – they all control the same ball!

Think text adventures meets marble run: players influence the direction of the ball by submitting comments into the Live Chat (such as ‘charge forward!’ or ‘please mr marble, roll right’).

Every 10-30 seconds the commands are tallied, averaged, and normalised, and the marble is fired in that chosen direction.
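
A hypothetical sketch of that tally step: map each comment to a direction vector, then sum and normalise the batch into a single launch direction. The command vocabulary here is illustrative:

```python
# Map each Live Chat comment to a direction vector, then sum and normalise
# the window into one launch direction. Vocabulary is illustrative.
import math
import re

DIRECTIONS = {
    "forward": (0.0, 1.0), "back": (0.0, -1.0),
    "left": (-1.0, 0.0), "right": (1.0, 0.0),
}

def aggregate(comments):
    """Tally direction words across one 10-30 second window of comments."""
    x = y = 0.0
    for comment in comments:
        for word, (dx, dy) in DIRECTIONS.items():
            if re.search(rf"\b{word}\b", comment.lower()):
                x, y = x + dx, y + dy
    length = math.hypot(x, y)
    return (x / length, y / length) if length else (0.0, 0.0)

print(aggregate(["charge forward!", "please mr marble, roll right", "right!!"]))
# -> a unit vector biased forward-right, used to fire the marble
```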

By utilising Epic’s Unreal Engine we provided high quality visuals and realistic physics for this live experience.

We hope to deliver more #Unreal live games that place the power in the players. We see potential to further develop ideas around this tech, such as pitting fans of competing teams, or groups with polarising opinions, against one another, and having them either come together to achieve a common goal or defeat each other and vanquish the enemy!

#gamedev #multiplayer #rollerputt #experience #realtime

TETRIS-U

April 1, 2021

TETRIS-U | FULL BODY TRACKING WITH MACHINE LEARNING

By harnessing the power of machine learning to track one’s entire body, T&DA has created an experimental Tetris clone… but with a twist!

To quote Roger Waters: “all in all you’re just another brick in the wall”… Introducing Tetris-U, where the player becomes the Tetris bricks by using a webcam.

The player captures real-life poses and converts them into stacking bricks. These poses can include twisting and contorting one’s body like a dancer, or as if possessed by unnatural forces, or simply creating shapes like a budding mathematician in a desperate search for the next perfect shape.
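
We built the real thing in Unity with ML-Agents, but the pose-to-bricks idea can be sketched with an off-the-shelf pose estimator such as MediaPipe as a stand-in; the grid size and capture file below are placeholders:

```python
# Stand-in sketch: estimate pose landmarks with MediaPipe and rasterise
# them into a coarse brick grid. Grid size and capture file are placeholders.
import cv2
import mediapipe as mp
import numpy as np

GRID_W, GRID_H = 10, 8
pose = mp.solutions.pose.Pose(static_image_mode=True)

frame = cv2.imread("player_pose.jpg")  # webcam capture (placeholder)
result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

grid = np.zeros((GRID_H, GRID_W), dtype=int)
if result.pose_landmarks:
    for lm in result.pose_landmarks.landmark:
        # Landmarks are normalised to 0..1; mark the cell each one lands in.
        col = min(int(lm.x * GRID_W), GRID_W - 1)
        row = min(int(lm.y * GRID_H), GRID_H - 1)
        grid[row, col] = 1

print(grid)  # the occupied cells become the falling "brick" shape
```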

You can play this while practicing yoga, dancing or any kind of workout!

Watch the demo video to see some Tetris-U shenanigans.

#machinelearning #tetris #ai #unity3d #mlagents #poseestimation #gameification #gamedev #rnd

ENVIROCUBE

March 24, 2021

ENVIROCUBE | MANIPULATING VIRTUAL ENVIRONMENTS WITH PHYSICAL OBJECTS

You’ve heard of Darkseid’s Mother Boxes or Thanos’s Infinity Stones… well, T&DA wants some power of its own, so we have conjured the EnviroCube!

We have successfully digitised a lolly jar by attaching a #quest controller and recreating it in a virtual environment.

When our virtual camera targets a certain face of the EnviroCube, the world transforms into that new environment, be it desert, forest, snowy mountains and many more.

Simply put, each face of the cube can trigger various events to occur.
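
A hypothetical sketch of that face-selection logic: rotate the cube’s six face normals by the tracked controller orientation and fire the event for whichever face now points at the virtual camera. The face-to-environment mapping below is illustrative:

```python
# Rotate the cube's six face normals by the tracked controller orientation
# and trigger whichever face now points at the camera. Mapping is illustrative.
import numpy as np
from scipy.spatial.transform import Rotation

FACE_EVENTS = ["desert", "forest", "snowy_mountains", "ocean", "volcano", "space"]
FACE_NORMALS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                         [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)

def active_face(controller_quat, camera_dir):
    """Return the environment whose rotated face normal best faces the camera."""
    rotated = Rotation.from_quat(controller_quat).apply(FACE_NORMALS)
    scores = rotated @ -np.asarray(camera_dir, dtype=float)
    return FACE_EVENTS[int(np.argmax(scores))]

# Identity orientation, camera looking down -z: the +z face wins.
print(active_face([0, 0, 0, 1], [0, 0, -1]))  # -> "volcano"
```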

From art gallery installations to futuristic remotes, to abstract puzzles, this tech has boundless potential waiting to be discovered.

It’s wizardry & these cosmic powers are within your reach by harnessing the power of the EnviroCube!

FACE THE JUNGLE

March 18, 2021

"FACE THE JUNGLE" - REALTIME CINEMATICS CONTROLLED WITH YOUR FACE.

Using Epic’s Unreal Engine and Live Link, our very own Sean Simon created a 360° immersive virtual world that we can navigate with nothing more than our facial features.

We can dynamically light a scene by opening our mouths, and we generate particles with the raise of an eyebrow… it brings a new meaning to ‘when you’re smiling, the whole world lights up with you’!

This is only one small example of what Epic Games’ real-time software can offer us creatives. We will be showing more soon, as we reveal some of the wonderful work we are excited to be a part of.

THE UPSIDE DOWN

March 18, 2021

"THE UPSIDE DOWN" - IMMERSIVE AR USING LIDAR

Harnessing the real-time capability of Unreal Engine, T&DA has rewritten the rules of reality to manipulate the form, interactivity, and atmosphere of any physical space. We have used the Oculus Quest as a virtual camera in Unreal Engine to explore and interact with a LiDAR scan that is mapped onto the real world.

Understanding the measurements of any physical space has huge potential, as it allows us to transform your house or office into dreamscapes, theme parks, space stations, jungles and so much more.

OFFICE OVERGROWTH

March 18, 2021

OFFICE OVERGROWTH | REALTIME FOREST LAYOUT WITH LIDAR

In this experiment, T&DA has used Epic’s Unreal Engine and Quixel Megascans to construct a realistic forest environment. Using the power of LiDAR technology, we collected accurate measurements of the T&DA office environment.

Taking this LiDAR scan, we then created our forest environment to the exact proportions and measurements of our office space, overlaying virtual rock formations and trees onto real world objects.

In this way, we have the ability to transform any real-world space into any virtual environment. Combined with our previous experiment, we can turn this environment into an interactive experience where users walk around hybrid virtual worlds, blurring the lines between real and virtual universes.

FISH FINGERS

March 18, 2021

"FISH FINGERS" | EXPERIMENTATION WITH HAND TRACKING

Here at T&DA we have been experimenting with our #manus gloves as a focal element for environmental storytelling. In this video we show how this real-time tech can be adapted to an underwater scene where we control a swarm of fish at the tip of our fingers.

Swim fishies swim!

 

SHRINK RAY

March 18, 2021

SHRINK RAY

Ever since the words “help me, Obi-Wan…” were uttered, tech geeks everywhere have wondered how we can leap from our Jetsons-style Zoom calls to communicating via holograms and real-time 3D representations of ourselves.

In the T&DA offices, we have edged closer to that future. We call this technology Shrink Ray, after our very own Creative Director Raymond. Here you can see big-Ray puppeteering a mini-Raymoji ‘Shrink Ray’ in real time, communicating as a 3D character via an iPhone camera’s AR.

We have done this by successfully uploading real-time mocap data to the cloud. This means that anyone anywhere around the world can access this data in AR and have a 3D person to interact with live.

Huge potential for performance shows/live events, and the future of communications…