
Designing an Inclusive Audio Guide Part 3: Talking Tech with Ruben Niculcea

Two men are engaged in conversation, seated at a brown paper-covered table in an art studio. The man on the left has an open notebook on the table in front of him, and the man on the right is holding an iPhone.

Ruben Tudorel Niculcea, lead software engineer at Carnegie Museums of Pittsburgh Innovation Studio, talks with Sina Bahram, Prime Access Consulting, Inc., during a user testing session.

This is the third post in a series about the development process of The Warhol’s new audio guide. 

To develop our inclusive audio guide, we’ve partnered with the fantastic Carnegie Museums of Pittsburgh Innovation Studio. I sat down with Ruben Tudorel Niculcea, lead software engineer at the Innovation Studio, to talk about the technology behind the iOS application.

Desi Gonzalez: What is the Innovation Studio?

Ruben Niculcea: The Innovation Studio is a post-digital design, research, and development lab for the Carnegie Museums of Pittsburgh. We ask, how can museums be 21st-century cultural institutions, and how can we showcase the breadth of content that we have available? Museums hold so much knowledge, history, and culture, and we use technology to present this in a way that’s relevant, engaging, and meets people where they are.

DG: Tell me a little bit about yourself and the role you played on this audio guide project.

RN: I’m a lead software engineer at the Innovation Studio, and I was the lead developer on the project. I’ve made all of the architectural and technical decisions associated with the delivery of this project. I help set development deadlines and make sure they’re met and that we’re delivering a product based on The Warhol’s needs. Sam Ticknor, a junior developer in the Innovation Studio, has been working alongside me to develop the app.

DG: Let’s get a bit technical. Can you describe the technology underlying the audio guide?

RN: This is a React Native application with a Redux architecture. It’s an emerging technology in the mobile app space, and I made that leap for a number of reasons. I feel that React Native is finally stable enough to be used in production, and it allows for rapid mobile development that is currently not available in traditional ways of developing mobile applications, such as Objective-C with a model-view-controller architecture. For example, we’ve redesigned the audio player for the app three or four different times, at times with completely different visual design, animations, and functionality. Because we went with React Native and a Redux architecture, we could rapidly prototype these different components without having to fundamentally change large parts of the application. Another great thing about React Native is that, if we want to port this to Android further down the line, 70% to 80% of the work will have already been done.

DG: A central goal of the project is to design the audio guide for users with visual impairments. Is this the first time you’ve developed an app for this audience?

RN: Yes, it is. At my previous job, we made sure that we included a baseline for accessibility. That means we had support for VoiceOver—the screen reader technology built into all iOS devices that uses a combination of different gestures to translate the visual design into speech—as a layer on top of what we built. But as for thinking about how a user would interact with an application with VoiceOver from the start, this is the first time I’ve done that. It’s completely different to design with VoiceOver in mind.

DG: In addition to VoiceOver, we’re also using Bluetooth Low Energy beacons to push out media to users. What has your experience working with beacons been like?

RN: Prior to this, I shipped a somewhat similar solution for the Carnegie Museums of Art and Natural History where, when you enter certain rooms, you trigger a notification highlighting certain content. This app works a little bit differently, in that we’re detecting when you’re within a particular region in the range of a certain object—a more accurate way of determining proximity.

Beacons are good for detecting when you’re in the proximity of something, but if you’re trying to figure out someone’s exact location, then beacons are actually a pretty crude solution to that problem. We’re still working to make the beacons function better.

DG: What’s a feature on the audio guide app that you are most excited about?

RN: With this audio guide, we’ve done things a bit differently: instead of having one longer audio file related to each work of art or theme, we’ve split our audio into smaller, modular chunks, so that you might have three to five audio “chapters” for each “story.” We’re planning on dynamically rearranging the order in which you listen to these chapters, which is something I’m most excited about.

Each chapter is assigned a category—such as historical context, artistic process, or archival material—and we will be able to rearrange the order in which you’ll listen to them depending on your preference. If you have VoiceOver enabled, we will move the visual description audio file to the top of the list, so that visually impaired audiences will get a sense of an artwork before diving into deeper content about it. And for all users, we’ll be able to tailor a path based on your interests. Let’s say you really like historical context—whether it’s a clip about the house Warhol grew up in, or a discussion about consumer culture in the 1950s—and you listen to the historical context every time but skip other kinds of content. The app will be able to learn what you like and place that category higher up on the list.

DG: What has been one of your biggest learnings throughout the process?

RN: The thing that’s really struck me is that designing with accessibility in mind—what’s often referred to as “universal design” or “inclusive design”—has not only meant learning a new area within software development, namely designing for screen readers, but has also forced me to simplify my designs in a way that benefits everyone. It’s made me realize that, if an interaction is too hard for someone who is visually impaired, then it’s something that could be improved for everyone. The final design of our audio player is a great example of this. We slowly simplified the audio player and how the controls are arranged until they were consistent across all screens and most states.


A screen shot of the "College Years" story from the new Out Loud app. The page includes a thumbnail of the piece, details about the piece and a "Chapters" section with several audio clips. There are audio controls like previous, rewind 5 seconds, pause, speed, and forward near the bottom of the screen. Below them is a tool bar with the options "Near me," "Stories," and "The Warhol."
The most recent layout for the audio guide.


DG: Where can I find the code?

RN: At the Innovation Studio and The Warhol, we are committed to open source, and to sharing our knowledge and technology with other museums. We plan on open-sourcing the app as soon as we launch version 1, and it will be available on The Warhol’s GitHub account. We’ll be releasing it with an MIT license, which means that anyone is free to use, copy, modify, publish, and distribute the code however they please.

Accessibility initiatives at The Warhol are generously supported by Allegheny Regional Asset District, The Edith L. Trees Charitable Trust, and the FISA Foundation in honor of Dr. Mary Margaret Kimmel.

Out Loud, The Warhol’s new audio guide, will be available at the museum fall 2016.