For this month’s sponsored Plex video, I examined the process of integrating the Plex API with AI coding assistants like Claude and Google Gemini. The primary objective was to determine whether natural language prompts could generate functional applications to control and analyze data on my local Plex media server.
The development setup was relatively straightforward. I accessed the Plex Media Server API documentation and downloaded the OpenAPI specification, which resulted in a single JSON file. After placing this file in a dedicated local directory, I instructed Claude Code to reference it for the API structure.
I tested this approach with Claude Code, ChatGPT’s Codex, and Gemini’s command-line interface on a Mac. All three tools successfully read the JSON file, interpreted the API requirements, and edited the application files directly on my local machine. Since these applications were designed to run locally, standard authentication was bypassed in favor of a Plex token. This token can be retrieved by viewing the XML data of any media item within the Plex web interface and extracting the character string from the resulting URL. You can see how to do that in the video.
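To show the shape of the requests the generated apps make, here is a minimal sketch of calling the local server with that token. The host address and token are placeholders for your own values, and the `Accept: application/json` header asks Plex to return JSON instead of its default XML:

```javascript
// Placeholders: substitute your own server address and token.
const PLEX_HOST = "http://192.168.1.50:32400";
const PLEX_TOKEN = "YOUR_PLEX_TOKEN";

// Build a Plex API URL with the token appended as a query parameter.
function plexUrl(host, path, token, params = {}) {
  const url = new URL(path, host);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  url.searchParams.set("X-Plex-Token", token);
  return url.toString();
}

// Example: list the titles of the server's library sections.
async function listSections() {
  const res = await fetch(plexUrl(PLEX_HOST, "/library/sections", PLEX_TOKEN), {
    headers: { Accept: "application/json" },
  });
  const data = await res.json();
  return data.MediaContainer.Directory.map((d) => d.title);
}
```

Every other call in the projects below follows this same pattern: a plain HTTP request to port 32400 with the token attached.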

The initial test was a swipe-based media selection tool. I requested an interface that presented random movie recommendations, where swiping right would immediately trigger playback on an Android TV client. Claude generated the core functionality on the first attempt, requiring only minor debugging to ensure the player execution command operated correctly. By default, the coding tools tended to write the web applications in Node.js. However, to utilize an existing web server on a Synology NAS, I instructed the AI to rewrite a subsequent project in PHP.
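The recommendation half of that swipe tool boils down to fetching the movie library and picking an entry at random. This is a hedged sketch, assuming a hypothetical movies section ID of `1`; the trickier playback command to the Android TV client (the part that needed debugging) is left out:

```javascript
// Hedged sketch: fetch every movie in a section and pick one at random.
// The host, token, and section ID are assumptions, not my actual setup.
async function randomMovie(host, token, sectionId = 1) {
  const url = `${host}/library/sections/${sectionId}/all?X-Plex-Token=${token}`;
  const res = await fetch(url, { headers: { Accept: "application/json" } });
  const { MediaContainer } = await res.json();
  return pickRandom(MediaContainer.Metadata);
}

// Pure helper: the index parameter defaults to a random choice but can be
// pinned for testing.
function pickRandom(items, index = Math.floor(Math.random() * items.length)) {
  return items[index];
}
```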

This PHP project resulted in a jukebox-style application designed for multiple users on a local network to add songs to a Plexamp queue. By scanning a QR code, users access a client screen on their mobile devices where they can search my server’s music library and submit song requests. As the administrator, I monitor the queue from an admin interface and have the ability to reorder the requested tracks, shifting specific songs up or down the playback list before they route through Plexamp.
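The reordering step in the admin interface is just an array operation. In the PHP version the queue lives server-side, but the logic can be sketched with a plain array standing in for it (`moveTrack` is a hypothetical name, not the app's actual function):

```javascript
// Hedged sketch of the admin's reorder action: shift a requested song
// one position up or down before the queue is handed to Plexamp.
function moveTrack(queue, index, direction) {
  const target = index + (direction === "up" ? -1 : 1);
  if (target < 0 || target >= queue.length) return queue; // no-op at the edges
  const next = queue.slice(); // copy so the original queue is untouched
  [next[index], next[target]] = [next[target], next[index]];
  return next;
}
```

A usage example: `moveTrack(["Song A", "Song B", "Song C"], 2, "up")` promotes the third request to second place.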

Subsequent experiments focused on data retrieval and display. I directed the AI to build a statistics dashboard that analyzed my viewing habits over the past year. After I had it program the app to filter out content watched by my children, it generated a report on my own media consumption patterns and active viewing days.
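The filtering step can be sketched against Plex's watch-history endpoint (`/status/sessions/history/all`), whose entries carry a `viewedAt` Unix timestamp and an `accountID`. The excluded account IDs below are hypothetical placeholders for the kids' accounts:

```javascript
// Hypothetical account IDs for the children's Plex Home users.
const KID_ACCOUNT_IDS = [2, 3];

// Hedged sketch: count distinct calendar days with at least one play,
// ignoring history entries from the excluded accounts.
function activeViewingDays(history, excludedAccounts = KID_ACCOUNT_IDS) {
  const days = new Set();
  for (const entry of history) {
    if (excludedAccounts.includes(entry.accountID)) continue;
    // Collapse each Unix timestamp to a YYYY-MM-DD calendar day.
    days.add(new Date(entry.viewedAt * 1000).toISOString().slice(0, 10));
  }
  return days.size;
}
```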

A final application served as a digital “Now Playing” marquee. It queries the server to display the current media’s thumbnail and a progress bar, while simultaneously pulling a list of similar titles from the library. Clicking any of the recommended titles halts the current video and initiates playback of the new selection.
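The marquee's polling loop can be sketched against the `/status/sessions` endpoint, which reports what is currently playing along with `viewOffset` and `duration` in milliseconds (the host and token are placeholders, and the similar-titles lookup is omitted):

```javascript
// Hedged sketch of the marquee's refresh: grab the first active session
// and derive the thumbnail URL and progress-bar percentage from it.
async function nowPlaying(host, token) {
  const res = await fetch(`${host}/status/sessions?X-Plex-Token=${token}`, {
    headers: { Accept: "application/json" },
  });
  const { MediaContainer } = await res.json();
  const session = MediaContainer.Metadata?.[0];
  if (!session) return null; // nothing is playing
  return {
    title: session.title,
    thumb: `${host}${session.thumb}?X-Plex-Token=${token}`,
    progress: progressPercent(session.viewOffset, session.duration),
  };
}

// Pure helper driving the progress bar's width.
function progressPercent(viewOffset, duration) {
  if (!duration) return 0;
  return Math.round((viewOffset / duration) * 100);
}
```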
My initial experiments suggest the barrier to entry for developing customized Plex experiences has lowered significantly. Where interacting with a platform’s API once demanded fluency in specific programming languages, I found that natural language processing models now act as a functional bridge between raw documentation and executable code.
Moving forward, integrating the Model Context Protocol (MCP) to expose Plex’s API to the AI will likely make this workflow more efficient, especially for those on constrained token limits with their AI provider. I’ve found Gemini Pro’s command-line interface to be fairly generous with its token allocation.
See more of my Plex content here!
