The LLM-Automated Metadata feature is designed to automatically populate metadata fields based on a prompt and contextual content, streamlining the publishing process and improving consistency.
How It Works
This feature integrates seamlessly into Wildmoka’s existing architecture. While the most visible changes appear in the publish panel, its implications span destinations, user profiles, and templates.
1. Destinations
Custom Destinations: Each metadata field can have its own dedicated prompt.
Pre-configured Destinations: Each metadata field has a predefined prompt that cannot be edited.
2. User Profiles
In the Destination tab of a user profile, each metadata field includes:
An AI checkbox to enable LLM automation for that specific field.
A Prompt input to specify the instruction sent to the LLM.
3. Templates
Templates follow the same structure as user profiles:
Each field has an AI checkbox and a prompt input.
Template settings override user profile and destination-level settings in the following hierarchy:
Template > Profile > Destination
This layered approach enables flexible control strategies, such as:
Preventing AI usage for sensitive fields (e.g. enforcing a policy that titles or descriptions remain human-edited).
Defining field-level prompts at the destination level when the input does not depend on users or templates (e.g. content age-appropriateness).
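The override hierarchy above can be sketched in a few lines. This is a hypothetical illustration, not Wildmoka's implementation: the function, dictionary layout, and field names are made up for the example.

```python
# Hypothetical sketch of the Template > Profile > Destination hierarchy.
# Each level maps a field name to its AI settings; names are illustrative.

def resolve_prompt(field, template, profile, destination):
    """Return the effective AI prompt for a metadata field, or None."""
    for level in (template, profile, destination):
        setting = level.get(field) if level else None
        if setting is not None:
            # The highest level with an explicit setting wins, including
            # an explicit opt-out (ai_enabled=False) for sensitive fields.
            return setting["prompt"] if setting.get("ai_enabled") else None
    return None

# Example: the template forbids AI for "title", so lower levels are ignored.
template = {"title": {"ai_enabled": False}}
profile = {"title": {"ai_enabled": True, "prompt": "Write a catchy title."}}
destination = {}
print(resolve_prompt("title", template, profile, destination))  # None
```

With no template-level setting, the same call falls through to the profile prompt, which is the behavior the hierarchy describes.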
Prompt Context
Currently, the LLM uses the subtitles of the clip to generate metadata.
Prompt Engineering
To achieve the best outcomes, prompts should follow structured writing principles, such as the CRAFT framework for LLM prompting.
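As an illustration, a structured prompt for a description field might look like the following. The clip subject and constraints are invented for the example, and the section labels reflect one common expansion of CRAFT (Context, Role, Action, Format, Tone); other expansions exist.

```
Context: You receive the subtitles of a short news clip.
Role: You are a social media editor for a broadcast newsroom.
Action: Write a one-sentence description of the clip.
Format: Plain text, maximum 140 characters, no hashtags.
Tone: Neutral and informative.
```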
LLMs can be used to populate any field type, including:
Open text
Dropdowns
Radio buttons
Checkboxes
Date pickers
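For constrained field types such as dropdowns, radio buttons, or checkboxes, the model's free-text answer has to be mapped onto one of the allowed options. A minimal sketch of that idea, with an invented helper and sample options (not Wildmoka's actual logic):

```python
# Illustrative only: match an LLM answer to a constrained field's options.

def coerce_to_option(llm_answer, allowed_options):
    """Return the allowed option matching the answer (case-insensitive)."""
    normalized = llm_answer.strip().lower()
    for option in allowed_options:
        if option.lower() == normalized:
            return option
    return None  # no match: leave the field empty for a human to fill

print(coerce_to_option("  Sports\n", ["News", "Sports", "Weather"]))  # Sports
```

A prompt for such a field should list the allowed options explicitly, so the model's answer is likely to match one of them.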
Prerequisites
The feature must be enabled in the tenant settings.
The source content must have subtitles.
Apply LLM-Metadata to overlays (clips & thumbnails)
Text overlays in videos and thumbnails can be auto-completed using Jinja variables that are themselves populated from AI prompts.
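The mechanism can be pictured as follows: the overlay text contains Jinja-style `{{ variable }}` placeholders, and each placeholder's value is first generated from its AI prompt, then substituted into the overlay. The sketch below emulates the substitution with the Python standard library rather than the real Jinja engine, and the variable names and values are invented:

```python
# Hedged illustration: fill Jinja-style placeholders in an overlay with
# values that, in production, would come from per-variable AI prompts.
import re

def render_overlay(overlay_text, ai_values):
    """Replace each {{ name }} placeholder with its AI-generated value."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: ai_values.get(m.group(1), m.group(0)),
        overlay_text,
    )

ai_values = {"headline": "Storm hits the coast", "city": "Brest"}
print(render_overlay("{{ headline }} | {{ city }}", ai_values))
# Storm hits the coast | Brest
```

Unresolved placeholders are left as-is here, so a missing AI value stays visible instead of silently rendering as empty text.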