The New Real Observatory Platform

The New Real Observatory platform is a machine learning tool built with and for artists.

It allows for multi-sensory exploration of possible futures as experiences co-constructed from human-generated datasets and artificial intelligence-driven interpolation.

The New Real Observatory offers a deeply personal encounter with the environment beyond human scale. Interactive art, sound, movement and play are prompted by global climate data and science. The architecture of the New Real Observatory platform integrates localized climate model forecasts with machine learning processing engines. Cutting-edge science and internationally renowned artists have generated astonishing, immersive digital experiences fueled by Artificial Intelligence.

  • The New Real Observatory platform is an Experiential AI system, developed specifically to test novel engagement strategies and resources that make ML/AI intelligible to non-expert users and the public. The platform enables the exploration of Machine Learning models through conceptual translation between human cognition and algorithmic analysis of data. This allows for whimsical yet profound artistic explorations depicting potential futures, crafted through the collaboration of human-generated datasets and artificial intelligence-driven interpolation.

  • The platform operates in two stages. In the training and fine-tuning stage, selected algorithms (transferGAN and Word2Vec) are trained on provided datasets to generate their inherent “model”, also called the latent space. Then, in the exploration stage, an additional layer of AI mapping tools allows users to generate a curated conceptual dimension within the developed model by translating between the human interpretation of the data and the algorithmic one.

    A critical addition to the exploration stage is the platform's suite of environmental data streams, based on three core parameters from the Copernicus Climate Data Service: air temperature, precipitation and wind speed. These are searchable by GPS coordinates and by time (up to 2100), so the parameters can be used to explore the conceptual dimension as a function of environmental change through time. This exposes the drifting nature of our conceptual understanding as a function of our environment, as well as the drastic nature and criticality of climate change effects.
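
    The platform's own data layer is not public, but all three parameters are available from the Copernicus Climate Data Store through the official cdsapi Python client. The sketch below is a minimal illustration of such a query, not the platform's actual pipeline: the dataset name, request keys, scenario, model and file names are assumptions drawn from the public CDS catalogue.

        import cdsapi
        import xarray as xr

        c = cdsapi.Client()  # credentials are read from ~/.cdsapirc

        # Illustrative request against the public CMIP6 projections catalogue;
        # each CDS dataset page defines the exact keys and permitted values.
        c.retrieve(
            "projections-cmip6",
            {
                "format": "zip",
                "temporal_resolution": "monthly",
                "experiment": "ssp5_8_5",  # an emissions scenario that runs to 2100
                "variable": "near_surface_air_temperature",
                "model": "ukesm1_0_ll",
                "year": "2100",
                "month": "07",
            },
            "temperature_2100.zip",
        )

        # After unpacking the archive, select the grid cell nearest a GPS coordinate.
        ds = xr.open_dataset("tas_2100.nc")  # hypothetical filename from the archive
        point = ds["tas"].sel(lat=55.95, lon=356.81, method="nearest")  # Edinburgh, on a 0-360 longitude grid
        print(point.values)

    Precipitation and wind speed follow the same pattern with a different "variable" value.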

THE PLATFORM

We aim to give artists a level of control and access that is not available to users of the current generation of AI tools, and to bring climate and environmental consciousness explicitly into the tools on the platform itself.

The platform was developed in two stages. In early 2022 we launched the image generation part of the platform, based on transferGAN; this pre-dated the launch of both the DALL-E and Midjourney tools. We commissioned five internationally leading artists to co-shape platform development and produced two artworks. These were shown at the Edinburgh Science Festival 2022, Ars Electronica 2022 and at the inaugural The New Real Salon at the Edinburgh Futures Institute.

Then, in late 2022, coinciding with the first public release of ChatGPT, we launched the word-processing part of the platform, based on Word2Vec. We also announced an open call for artist responses to this new feature, as part of the Uncanny Machines art commission with the Scottish AI Alliance. Five development awards were on offer to explore the new capabilities, as well as one full artwork commission, to be shown in 2024.

The video outlines the platform, focusing on the transferGAN capability. You can find more details about both features in the drop-downs below.

  • For the visual processing engine, the transferGAN algorithm is pre-trained on a standard image database. The exploratory dimension itself is constructed through fine-tuning: users input two sets of curated images, Class A and Class B, which describe two opposing aspects of a conceptual dimension (e.g. wet-dry or positive-negative).

    Sets of up to 100 MB of varied images are zipped and uploaded to launch the fine-tuning process. In the background, the images are unpacked and downsampled to uniform 128 by 128 pixel tiles, then given to the pre-trained transferGAN to interrogate, map and compress onto a linear latent space vector connecting the two classes (see the preprocessing sketch after this list).

    This “dimension vector” can be explored using our SLIDER tool, which lets users generate an image at each hundredth of the distance between the endpoints of the vector (see the interpolation sketch after this list).

  • First, we use a Word2Vec algorithm to learn word associations from any corpus of text, constructing a latent representation in which each word is represented by a multi-dimensional vector. We provide a pre-trained model, but users can upload their own corpus as well.

    In the second step, users are invited to generate their conceptual dimension. This is queried as a string of words of your selection, either from a small textual corpus or as a series of keywords describing the evolution of a thought or of some natural or artificial quality (think of a sequence such as: desert, arid, dry, humid, wet, flooded, water, sea). The platform will locate these words within the latent space (as long as they are present in your corpus) and then map the shortest path between them all, thus outlining your ‘conceptual dimension’.

    Then, using our SLIDER tool, you can interrogate this conceptual dimension. You can explore its ordering (the words in the latent space may be ordered differently from the way you envisaged!), and you can add another probing word and examine its relationship (distance) to that dimension. You can also use 'probe words' to explore and generate new associations, i.e. move along the dimension to look for neighbouring words in the same direction (see the gensim sketch after this list). The dashboard will display a visualisation that flattens the latent space to 2D, with the slider represented as a straight line.
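
    To make the image preprocessing concrete, here is a minimal sketch of the step described above: unpacking an uploaded class archive and downsampling every image to a uniform 128 by 128 pixel tile. The function and file names are hypothetical, and Pillow stands in for whatever the platform uses internally.

        import io
        import zipfile
        from PIL import Image

        def prepare_class(zip_path, size=(128, 128)):
            """Unpack one uploaded class archive and downsample each image
            to a uniform tile, as described for the fine-tuning stage."""
            tiles = []
            with zipfile.ZipFile(zip_path) as zf:
                for name in zf.namelist():
                    if not name.lower().endswith((".png", ".jpg", ".jpeg")):
                        continue  # skip any non-image entries in the archive
                    with zf.open(name) as fh:
                        img = Image.open(io.BytesIO(fh.read())).convert("RGB")
                    tiles.append(img.resize(size, Image.Resampling.LANCZOS))
            return tiles

        class_a = prepare_class("class_a.zip")  # e.g. the 'wet' images
        class_b = prepare_class("class_b.zip")  # e.g. the 'dry' images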
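
    The SLIDER itself amounts to linear interpolation along the dimension vector. A minimal sketch, assuming a hypothetical generator network G that maps a latent vector to a 128 by 128 image, with latent endpoints z_a and z_b standing in for the two classes:

        import torch

        @torch.no_grad()
        def slide(G, z_a, z_b, steps=100):
            """Generate one image at each 1/steps of the way from z_a to z_b.
            G, z_a and z_b are placeholders for the platform's internals."""
            images = []
            for i in range(steps + 1):
                t = i / steps                     # slider position in [0, 1]
                z = (1 - t) * z_a + t * z_b       # point on the linear dimension vector
                images.append(G(z.unsqueeze(0)))  # one generated image at this position
            return images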
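
    On the text side, similar ideas can be reproduced with the open-source gensim implementation of Word2Vec. A minimal sketch under simplifying assumptions: a toy corpus, and a straight line between the first and last keyword in place of the platform's shortest-path mapping and 2D visualisation.

        import numpy as np
        from gensim.models import Word2Vec

        # Toy corpus of tokenised sentences; real corpora are far larger.
        sentences = [["desert", "arid", "dry"], ["humid", "wet", "flooded"],
                     ["water", "sea", "rain"]]
        model = Word2Vec(sentences, vector_size=50, min_count=1, epochs=200)

        chain = ["desert", "arid", "dry", "humid", "wet", "flooded", "water", "sea"]
        vecs = [model.wv[w] for w in chain if w in model.wv]  # only words in the corpus

        # A crude 'conceptual dimension': the unit direction from 'desert' to 'sea'.
        axis = vecs[-1] - vecs[0]
        axis = axis / np.linalg.norm(axis)

        # Probe word: how far along the dimension does 'rain' sit?
        offset = model.wv["rain"] - vecs[0]
        print("'rain' projects at", float(offset @ axis), "along the dimension")

        # Move halfway along the dimension and look for neighbouring words there.
        midpoint = vecs[0] + 0.5 * (vecs[-1] - vecs[0])
        print(model.wv.most_similar(positive=[midpoint], topn=3))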

Artists using the Word2Vec part of the platform (2023)

A film on five artists/teams who envision new horizons for human-machine creativity, help us navigate the profound challenges of our times, and explore their own creative agency when developing or using AI. Featuring Kasia Molga, Alice Bucknell, Linnea Langfjord Kristensen and Kevin Walker, Sarah Ciston, and Johann Diedrick and Amina Abbas-Nazari.

This is part of the Uncanny Machines commission, a partnership between The New Real at the University of Edinburgh, the Scottish AI Alliance, the Alan Turing Institute and the British Library.

You can learn more about their projects and journeys below.


Artists using the transferGAN part of the platform (2022)

In this video, artists Inés Cámara Leret, Kizzie Macneill and Lex Fefegha talk about how they are using the transferGAN functionality of The New Real Observatory platform to explore the local meaning and relevance of global climate information.

You can see their work with the platform below.


Due to the limited capacity of the pilot deployment, access to the platform is restricted at this time. If you are interested in using the platform, please contact us.

We also hope to publish the source code soon.