ARtificial

Reprogramming human-machine creativity through Artificial Intelligence and Augmented Reality

ARtificial is an augmented reality application I designed and developed while studying at the Royal College of Art, now available on the App Store. It bridges generative AI with physical space, offering users a new way to experience and question machine-generated visuals. First exhibited at Dubai Design Week 2023, the app continues to evolve, most recently with an upgraded interface and AI model for faster, more refined results.

Click here to see it on the App Store

Client:

Self-directed project

Role:

Product designer & developer

Skills:

UX/UI design · AR prototyping · AI API integration · Creative coding · Research · Unity

Collaborators:

Supported by The Royal College of Art

Problem:

Generative AI tools often dominate the creative process, leaving the human as a passive viewer or prompter. This imbalance limits critical engagement and can reinforce opaque power dynamics between user and algorithm.

So how can users remain active agents in posthuman creative practice?

An AI-generated image of "AI taking over human creativity by leaving the human as a passive prompter"

  • Explore the full story

Research

My research was sparked by an interest in how technology shapes our visual culture. Computers don't see or interpret images the way humans do, yet algorithms increasingly decide which images we are exposed to.


Having studied theoretical writing on technology that predates the arrival of AI, I asked myself how those ideas applied to generative imagery. My goal quickly became understanding the user's role in machine-mediated creativity.


Interviews and informal testing with early adopters of generative art tools such as DALL·E and Midjourney revealed common frustrations. Although users were amazed at the speed and quality of the outputs, they often felt detached from them, with little sense of ownership or contextual control.

Key insights:


• Users want more authorship and agency over the content they create, especially when it's AI-assisted


• The environment and medium in which we see images shape how they are perceived


• Real-time responsiveness is essential to creative flow

Solution

ARtificial invites users to generate AI artworks via text prompt and see them materialise in real space using augmented reality. The interface is intentionally minimal, allowing the user’s environment and creative prompt to take centre stage. The app creates floating “digital paintings” that exist contextually, not abstractly.


The project was built in Unity, which allows for quick iterative development and testing. When the "snap" button is pressed, a rectangle is placed in augmented reality using surface and distance tracking. The snapped image is sent to the AI together with whatever prompt is written in the input field at the bottom, and the rectangle's texture is then updated with the AI-modified image. I used Replicate for image generation, as it provides an API to open-source models, which I connected to via HTTP requests in C#. I developed for iOS only, since Xcode and ARKit proved easier to work with and mobile offers the most universal user base.
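
To illustrate the flow, here is a minimal sketch of how such a request to Replicate's predictions endpoint might look from Unity. It is an assumption-laden reconstruction rather than the app's actual code: the class name, model version hash, and input field names are placeholders.

    using System.Collections;
    using UnityEngine;
    using UnityEngine.Networking;

    // Hypothetical reconstruction of the snap-to-generate flow; not the app's
    // actual source. Model version and input field names are placeholders.
    public class ArtificialGenerator : MonoBehaviour
    {
        const string ApiUrl = "https://api.replicate.com/v1/predictions";
        [SerializeField] string apiToken;      // Replicate API token
        [SerializeField] string modelVersion;  // version hash of the diffusion model

        // Called when the "snap" button captures the camera image.
        public void Generate(Texture2D snapped, string prompt)
        {
            StartCoroutine(SendPrediction(snapped, prompt));
        }

        IEnumerator SendPrediction(Texture2D snapped, string prompt)
        {
            // Encode the snapped image as a base64 data URI for the request body.
            string imageUri = "data:image/png;base64," +
                System.Convert.ToBase64String(snapped.EncodeToPNG());

            // Hand-built JSON body; a real app should JSON-escape the prompt.
            string body = "{\"version\":\"" + modelVersion + "\"," +
                "\"input\":{\"prompt\":\"" + prompt + "\"," +
                "\"image\":\"" + imageUri + "\"}}";

            using (UnityWebRequest req = new UnityWebRequest(ApiUrl, "POST"))
            {
                req.uploadHandler = new UploadHandlerRaw(
                    System.Text.Encoding.UTF8.GetBytes(body));
                req.downloadHandler = new DownloadHandlerBuffer();
                req.SetRequestHeader("Content-Type", "application/json");
                req.SetRequestHeader("Authorization", "Token " + apiToken);

                yield return req.SendWebRequest();

                if (req.result != UnityWebRequest.Result.Success)
                    Debug.LogError("Replicate request failed: " + req.error);
                else
                    // The response holds a prediction ID to poll; once the
                    // generated image URL is ready, it is downloaded and
                    // applied as the texture of the AR rectangle.
                    Debug.Log("Prediction created: " + req.downloadHandler.text);
            }
        }
    }

In practice the prediction is then polled until complete, and the resulting image replaces the rectangle's texture, closing the loop described above.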


From the beginning, I was guided by a belief that technology should enhance human expression, not replace it. Drawing on critical theory (Parisi, Bridle, Braidotti, Paglen), I designed the app to foreground the user's decisions, context, and environment. This meant treating generative AI as a collaborator instead of an author, framing the visual output within the user's world.

Key features:


• Real-time image generation with natural language prompts


• Integration with AI model via an API HTTP request


• Intuitive mobile interface designed for easy AR viewing


• Publicly available on the App Store for iOS devices

A video tutorial showing the ARtificial app in action

Version 2

In 2024, I released an update to the app, driven by insights gathered from user feedback and testing. One main finding emerged: although the app appeals to a broad audience, it resonates most strongly with users under 15, who engage with it as a form of immersive, escapist creativity.


In response, I redesigned the interface to improve usability and reduce cognitive load, focusing on refining the visual language. I also upgraded the AI backend to a newer, faster Stable Diffusion model to enhance output quality and generation speed.


A major addition in this update was a strength slider: a control that allows users to adjust how closely the AI output resembles the original image they took. This feature addressed a common frustration: outputs that distorted the input image to the point of being unrecognisable. The slider sends a value between 0 and 1 via the HTTP request, giving users more control over the balance between originality and fidelity in the generated image.
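
As a rough sketch of that wiring (again with placeholder names, and assuming the model accepts Stable Diffusion's prompt_strength parameter, as Replicate's img2img models do), the slider value can be folded into the request's input object:

    using UnityEngine;
    using UnityEngine.UI;
    using System.Globalization;

    // Hypothetical illustration; "prompt_strength" is an assumption about the
    // exact parameter name the diffusion model accepts.
    public class StrengthControl : MonoBehaviour
    {
        // Slider configured in the Inspector with min = 0, max = 1.
        [SerializeField] Slider strengthSlider;

        // Folds the slider value into the JSON "input" object for the request.
        // Values near 0 keep the output close to the snapped image; values
        // near 1 let the AI depart from it more freely.
        public string BuildInput(string prompt, string imageUri)
        {
            string strength = strengthSlider.value
                .ToString("0.00", CultureInfo.InvariantCulture);
            return "{\"prompt\":\"" + prompt + "\"," +
                   "\"image\":\"" + imageUri + "\"," +
                   "\"prompt_strength\":" + strength + "}";
        }
    }

Formatting the value with an invariant culture avoids locale-dependent decimal separators breaking the JSON payload.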

New key features:


• An updated backend that sends HTTP requests to a faster, more reliable diffusion model, producing higher-quality outputs


• A new slider that sets a value between 0 and 1, letting the user control the "strength" of the AI's modifications


• A slicker user interface, with usage instructions and an information panel easily accessible via a dropdown menu

Impact:

The app was launched on the App Store in 2023 and selected by the Royal College of Art to represent innovative design at Dubai Design Week. Since then, it has gathered over 200 downloads and a base of regular users.

It continues to evolve, encouraging users to reframe their relationship with AI and place themselves at the centre of the creative loop.
