Yesterday, I had the opportunity to attend the xR in EDU Conference hosted at SRI International in Menlo Park, CA.
It was inspiring to hear from both industry innovators and leading educators in the field. I’ve been trying to wrap my head around how best to support teachers in integrating virtual reality, augmented reality, and mixed reality into their classrooms, and I appreciated the perspectives that both industry and education folks shared. It was great to get both sides of the table in one place to begin the conversation!
Check out my Google Photos album here for some key moments as well as some presentation slides.
Google’s Magenta team has been exploring a lot of interesting generative media through its AI research. Recently WIRED published an article about some sound explorations going on at Google. NSynth feeds a “massive database of sounds” into a neural network and generates never-before-heard sounds that fuse the qualities of multiple source sounds into some interesting audio. From the Google Magenta website:
“Unlike a traditional synthesizer which generates audio from hand-designed components like oscillators and wavetables, NSynth uses deep neural networks to generate sounds at the level of individual samples. Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.”
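The “level of individual samples” idea can be sketched in code. The snippet below is only a toy illustration, not NSynth’s actual architecture (which is a deep WaveNet-style autoencoder trained on a huge dataset): it generates audio one sample at a time, each new sample conditioned on the samples before it, with a tiny random linear model standing in for the trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

CONTEXT = 16  # number of past samples the model conditions on
# Stand-in for learned network weights; NSynth learns these from data.
weights = rng.normal(scale=0.1, size=CONTEXT)

def generate(n_samples, seed_audio):
    """Generate audio one sample at a time, each conditioned on the past."""
    audio = list(seed_audio[-CONTEXT:])
    for _ in range(n_samples):
        context = np.array(audio[-CONTEXT:])
        nxt = np.tanh(weights @ context)  # squash into [-1, 1] like audio
        audio.append(nxt)
    return np.array(audio[CONTEXT:])

# Seed the generator with a short sine wave, then let it "dream" new samples.
out = generate(100, seed_audio=np.sin(np.linspace(0, 4 * np.pi, 64)))
print(out.shape)  # (100,)
```

A real sample-level model replaces the linear step with many layers of dilated convolutions, which is what lets it capture timbre and dynamics rather than just echoing its input.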
Check out WIRED’s SoundCloud playlist (in the article linked above) to hear examples.
And then there is Deep Dream, Google’s dreaming AI art bot.
Ah the future is fun!