360 VR in the Classroom


The Emerging Technologies Lab in collaboration with Kellogg School of Management had the opportunity to record a 360 video tour of a thriving and growing business as part of a classroom case study. We were able to record inside a Patel Brothers grocery store and create a VR tour throughout the space, showcasing its variety of produce, spices, and other specialty foods.

We hope this can lead to further classroom experimentation and exploration for immersive learning! 

Creative Futures of Generative AI

In September of 2022, nearly two years after its launch, OpenAI made its flagship AI image generation platform Dall-E available to the public. Alongside competitors such as Midjourney and Stable Diffusion, Dall-E can generate original images in an endless range of styles and genres from a one-line text prompt supplied by the user. With public discourse already primed by justified concern over deepfakes, artists, writers, and other creatives balked at the implications: why commission an artwork when you can simply ask Dall-E to make it for you with a few keystrokes?

As an art historian, this pairing of technological innovation and public morality crisis was familiar to me. It happened with the printing press in the 15th century, and again with photography in the 19th. Who would buy a portrait when you could capture the exact likeness of a person with the technology of a camera? With generative AI, however, this historical precedent was complicated by the technology’s own inherent ability to produce original works with minimal human intervention. Whereas a camera requires lighting, focus, staging, and compositional direction from the artist in order to produce a photo, all Dall-E requires is a sentence to produce wildly variable images. What does this expanded field of creative technologies mean for arts practice of the future?

Enter Helicon, Northwestern University’s on-campus literary magazine. Established nearly 45 years ago by three poetry students, Helicon is a storied institution of literary and artistic merit representing some of the University’s best creatives. I reached out to the magazine’s editor, Lily Glaubinger, with a proposal: bring the editorial board to the Emerging Technologies Lab for a writing workshop, and let’s see what original works could come from experimenting with this newly accessible technology. How would our writers use Dall-E 2? What challenges would they face? How would generative AI influence their writing practice? Our contributors, Ann Gaither, Lily Glaubinger, Natalie Jarrett, Skye Tarshis, and Alivia Wynn, were eager to explore the implications of AI image generation for their own creative practices.

Lily Glaubinger

On a Saturday afternoon in January, Helicon’s editorial board arrived at the Emerging Technologies Lab to use Dall-E 2 for the first time. We began with a short discussion of the state of generative AI in the field, a demo of how Dall-E 2’s interface worked, and tips on how to tailor a prompt for the best results. We also discussed some of the major pitfalls and ethical concerns of generative AI, from its tendency to depict femme-coded bodies in overtly sexualized ways to its statistical likelihood of defaulting to white and lighter skin tones. By discussing these obstacles at the beginning of the writing workshop, we flagged such context as critical to our writers’ understanding of the tool at hand. Though generative AI can produce endless variations of images, text, sound, and even video from a well-worded prompt, those outcomes are inflected by the biases and blind spots baked into its training data and into the decisions of the people who build these systems. Knowing this from the start, we hoped, would encourage our writers to think critically and adaptably about how they used the tool over the course of the workshop.
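One way to make the prompt-tailoring tips concrete is to treat a prompt as structured parts: the subject first, then optional modifiers for medium, style, and lighting. The helper below is a hypothetical sketch for illustration only; the function name and fields are our own invention, not the workshop's actual material.

```python
def build_prompt(subject, medium=None, style=None, lighting=None):
    """Assemble a text-to-image prompt from structured parts.

    The subject always comes first; optional cues for medium, artistic
    style, and lighting are appended as comma-separated clauses.
    """
    parts = [subject]
    if medium:
        parts.append(f"rendered as {medium}")
    if style:
        parts.append(f"in the style of {style}")
    if lighting:
        parts.append(f"with {lighting} lighting")
    return ", ".join(parts)


# Example: a bare subject versus a fully specified prompt.
print(build_prompt("a heron"))
print(build_prompt("a heron", medium="an oil painting", style="ukiyo-e"))
```

Separating the pieces this way makes it easy to hold the subject constant while varying one modifier at a time, which is essentially the experimental approach several of the writers took.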

Ann Gaither

The resulting text-and-image pieces, now on display at the front of the Main Library, are as beautiful as they are strange, as uncanny as they are familiar, and as humorous as they are disturbing. Each writer took a different approach to Dall-E 2: some specified clear descriptions with defined aesthetic parameters, while others fed their original prose into Dall-E 2’s text box and let chance play out in ones and zeros. Some embraced the absurdity of the resulting images; others made hypotheses about how to produce certain types of images and experimented accordingly. In their exit interviews, almost every writer remarked on both the usefulness of the tool for ideating creative work and their surprise at the limits of Dall-E 2’s scope. The future of generative AI in creative practice and industry continues to take shape; it is our imperative to make sure that shape forms responsibly and ethically.

Photogrammetry of a Shiny Object

Photogrammetry is the process of taking many overlapping photographs of an object and processing them with software to create a textured 3D model of that object. These 3D models can then be used to create new experiences on the Web, in Augmented Reality and Virtual Reality, or be 3D printed as physical objects.
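To give a sense of the capture side of that process, a turntable scan is typically planned as rings of evenly spaced shots at a few camera elevations, so every surface appears in several overlapping frames. The sketch below is illustrative only; the shot counts and elevation angles are assumptions, not a recommended recipe from this project.

```python
def turntable_plan(shots_per_ring=24, elevations_deg=(0, 25, 50)):
    """Generate (azimuth, elevation) camera angles, in degrees, for a
    turntable photogrammetry capture: one ring of evenly spaced shots
    around the object for each camera elevation."""
    plan = []
    for elev in elevations_deg:
        for i in range(shots_per_ring):
            plan.append((360.0 * i / shots_per_ring, float(elev)))
    return plan


# 24 shots per ring at three elevations -> 72 photos total,
# with 15-degree steps between neighboring shots in a ring.
angles = turntable_plan()
print(len(angles), angles[:2])
```

Denser rings improve feature overlap at the cost of capture and processing time; reflective objects like the trophy below complicate matters regardless of how many angles you shoot.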

This blog post is technical in nature, and goes in-depth on the efforts put into a recent project.

Image of the trophy from the camera used to do the photogrammetry

Behind the scenes photo of the trophy on vinyl background

Zoran and I attempted a photogrammetry scan of this coffee maker trophy. It has a highly reflective surface and many curves, so we knew it would be challenging. Zoran first tried photographing it in the 32″x32″ light box, but the trophy was too large for the box, and unflattering reflections came not only from the white and silver sides of the light box but also from the hole in the front, since the trophy reflected the camera as well.

We then set it up on white vinyl on a tabletop, holding the vinyl up with a C-stand arm. The white behind the trophy gave it nice contrast, but because the piece of vinyl was small, it created an unflattering reflection in the bottom of the cup, similar to the reflection above. For lighting, we used two Litepanels Gemini 2×1 fixtures, one with a softbox and one without, to see if there was any significant difference; we couldn’t detect one.

The black vinyl had a similar problem, so we tried placing the trophy directly on the floor on a 6×6 black flag, which we hoped would be less reflective than the vinyl, and hung the vinyl behind the trophy.

This kind of worked, but the text on the trophy was very hard to read, so we added a white card on the floor. That helped, but even at roughly 3′x5′, the card was too small and only made some of the text more readable.

Wider behind the scenes shot of the trophy, showing lighting, camera and backdrops.

Scanning a one-of-a-kind artifact like this, as-is, requires a complicated setup.

  • One option would be to construct a larger white light box, light it from the outside through diffused white fabric, and obscure the lens where it peers through the fabric.
    • Something that could help would be a two-way mirror in front of the lens to block the reflection of the lens, though the mirror might then reflect the trophy itself.
    • We could also be more creative in our lighting, perhaps lighting it directly from above, and using the tent/lightbox to illuminate it from the sides by reflection off the fabric.
    • One thing to consider is that the smaller the diameter of the “tube”, the smaller the reflection, but at some point the reflection is less of an object, and more just a specular highlight. It may be difficult to get an even lighting over the whole object because of this.
    • We may be able to build a larger lightbox with our newly acquired 6×6 frames and rags, but we have not tried. We could suspend the light source over the lightbox using the Menace arm kit. This would take up a lot more space in the studio than the current lightbox does, however.
  • Another option would be to get the top parts of the trophy looking good, perhaps using a polarizing filter on the camera, the lights, or both [see the sketchfab article below], and photograph it from all angles; then relight so the middle looks good and photograph it from all angles; then relight so the bottom looks good and photograph it from all angles. The three images for *each* camera and turntable position would then need to be combined into one image, and those composite images sent to the photogrammetry software for processing. This would require very precise camera and lighting positioning.
  • If we would be able to alter the artifact, we could spray it with one of these sprays:
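The per-position compositing step in the second option above could be sketched as a simple per-pixel merge of the three differently lit exposures. This is a hypothetical illustration, not a production pipeline (a real merge would blend with masks or weights rather than hard-select); here each output pixel takes, from the available exposures, the value closest to mid-gray, i.e. the least blown-out or crushed one.

```python
def composite(exposures):
    """Merge several identically framed grayscale shots (values 0-255,
    as nested lists) by picking, per pixel, the exposure whose value is
    closest to mid-gray -- a crude proxy for 'best lit'."""
    height = len(exposures[0])
    width = len(exposures[0][0])
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            out[y][x] = min((img[y][x] for img in exposures),
                            key=lambda v: abs(v - 128))
    return out


# A blown-out top exposure (255), a good middle one (120), a crushed
# bottom one (0): the merge keeps the well-exposed value.
print(composite([[[255]], [[120]], [[0]]]))
```

Because the camera and turntable positions must match exactly across the three lighting passes, the merged frames line up pixel-for-pixel, which is why the precise positioning mentioned above matters so much.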
