
MYTHOS works across six core areas:

1. Image and Video
2. Reconstruction and Renovation
3. Research
4. Avatars
5. Scripts and Narration
6. Audio, Music and Dialogue
1. Image and Video

We use a blend of film, CGI animation and Generative AI to support the creation and enhancement of visual material, including:

Image generation based on historical research

Visual interpretation of collections and archives

Image restoration and enhancement

CGI and AI-assisted video creation and editing

Accessible visual content for exhibitions and online use
Note: All imagery is clearly contextualised and, where appropriate, labelled as CGI, AI-assisted or reconstructed.
Case study: BAKEHOUSE CLOSE
A History Society wanted to realise Bakehouse Close, Edinburgh, as a late 19th-century moving postcard -
Generative AI image and text prompting was used extensively, working from supplied photographs, period illustrations, 19th-century details and atmospheric 4K digital photography.
> The final result is a realistic, highly detailed street video with an overcast sky and wet streets, in the style of Jane Stewart Smith's 1870 original illustration.




2. Reconstruction and Renovation
We create Generative AI-assisted reconstructions to help audiences imagine the past, including:
Reconstructed buildings, interiors and landscapes

Visualisations of lost or damaged objects
Interpretive reconstructions of historical scenes
Multiple-version reconstructions to reflect uncertainty or debate
Reconstructions are always:

Evidence-based

Clearly distinguished from original material

Developed in collaboration with subject experts
Case study: THE KIESS TUNIC
A Research Project wanted to show how a tunic, made of historic cloth found in a Scottish Highland peatbog, would have appeared when new -
We collaborated to sample the cloth, then used Generative AI to apply a period tartan check texture to an archive image of the actual 18th-century tunic.

> The final result is a reconstructed tunic as it would have appeared when first made.



3. Research

We use Generative AI as a research support tool, not a replacement for scholarship. This includes:

Analysing large text, image or audio collections

Supporting literature and archival research
Identifying patterns, gaps or connections in datasets
Summarising and structuring complex material
Note: All findings are reviewed by human researchers and presented with appropriate caveats and references.
Case study: THE OMEN RELIC
An Education Project wanted to show how an imagined scrap of historic cloth, found in the Scottish Highlands, could have led to myths surrounding the Battle of Flodden -
We offered an arts student a placement to create a sample cloth. CGI was used to add period tartan-like patterns and to simulate centuries of ageing in an anaerobic peatbog. Researched text and data around the cloth were added to contextualise the item in history.
> The final result is an imagined Wikipedia page that highlights both the fragility of historical objects and the stories they carry forward.



4. Avatars

We design AI-powered avatars and digital characters for interpretation and engagement, such as:

Historical figures for exhibitions or education

Composite or fictional characters based on research

Guided digital interpreters for visitors

Accessible interfaces for answering visitor questions
We carefully manage risks related to:

Misrepresentation

Bias and stereotyping

Identity, consent and likeness

Audience understanding of what is simulated
Case study: WELCOME AVATAR
A Museum wanted a sample welcome avatar with a warm, topical look and feel -
Several key topic text prompts were used in Generative AI tools to create a satisfactory still image, then a moving clip.
> The final result was a realistic, highly detailed Highland Scots-styled speaking avatar for use on social media.

5. Scripts and Narration

We support written interpretive content using Generative AI, including:

Exhibition scripts

Film and video narration
Educational content
Brand marketing and engagement copy

Multilingual versions of existing text
AI-generated drafts are always treated as ‘starting points’, refined by writers, curators and educators to ensure tone, accuracy and sensitivity.
Case study: LADY SINCLAIR (16th Century)
An animated film highlighting the aftermath of the war of 1513 on Lady Sinclair, and the devastating effect of endlessly reliving the loss of 300 clansmen and her beloved William -
Researchers analysed British Newspaper Archive reports on Caithness and the Battle of Flodden, then used Generative AI to merge them into a single first-person narrative, which was human-edited and given period-appropriate Caithness dialect.

> The final result is a first-person narrative in the voice of an early 16th-century Caithness noblewoman.





6. Audio, Music and Dialogue

We use Generative AI to support sound-based storytelling and access, including:
Spoken narration and dialogue

Audio guides and soundscapes

Music composition for interpretation and installations

Transcription and translation of audio material

Voice generation for accessibility
We prioritise transparency around synthetic voices and music, and avoid replacing human performers where this would undermine artistic or community value.
Case study: MANTIQ-UT-TAYR (Persian poem)
A Research Study on language and voice wanted to realise how the accent of a dialect that does not survive today might have sounded -
Generative voice tools were used to sample and train on Persian dialects, reimagining twelfth-century voices from north-eastern Iran.
> The final result was a musical track created from a passage of the poem's text.





