I set up streaming audio with ffmpeg and AWS

I'm hype about this. First step to becoming Spotify.

I'm excited about this. I've been thinking of ways to promote music for launches, and I thought: what if we could let fans give us their emails, and in exchange we send them a link to a private stream of the song, along with a heartfelt note of thanks for being a fan?

Configuring Amazon Web Services & Chunking the Audio

To get this to work, I followed this wonderfully instructional YouTube video to get my Amazon S3 and CloudFront services set up. S3 serves as web storage, a place to put the files, and CloudFront sits in front of it to expose those files to the internet.
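(If you'd rather script that part instead of clicking through the console, the same setup can be sketched with the AWS CLI; the bucket and file names below are just placeholders.)

```bash
# Make a bucket and push an audio file up (names are placeholders)
aws s3 mb s3://my-song-streams
aws s3 cp song.wav s3://my-song-streams/
```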

At this point, I can put an audio file in an S3 bucket, but it's just the file. Nothing has been chunked for streaming. It looks like AWS MediaConvert can do this, but I couldn't get it to work, so I reached for the command-line tool ffmpeg.

I then used ffmpeg to convert my .wav file into a bunch of chunked .aac files and a single .m3u8 playlist file. This way we don't share the entire audio file; instead we stream the audio to the client in 10-second chunks.

```bash
ffmpeg -i input.wav -c:a aac -b:a 192k -vn \
  -hls_time 10 -hls_playlist_type vod \
  -hls_segment_filename "segment_%03d.aac" \
  output.m3u8
```

Once those are in the S3 bucket, and I've updated the CORS permissions to allow other websites to query the endpoint, I'm almost done!
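For reference, an S3 CORS configuration is a small JSON document along these lines (the allowed origin here is a placeholder for whatever domain the player lives on):

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedOrigins": ["https://example.com"],
    "ExposeHeaders": []
  }
]
```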

I don't know all the specifics, but basically we can now point an <audio /> element at our .m3u8 on CloudFront. The page requests the playlist and from there knows how to start streaming in the rest of the files.
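That's less mysterious once you open the .m3u8: it's just a plain-text playlist listing the segments in order, and the player fetches each one as playback progresses. The one ffmpeg generated looks roughly like this (exact durations and version number may differ):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:10.000000,
segment_000.aac
#EXTINF:10.000000,
segment_001.aac
#EXTINF:10.000000,
segment_002.aac
#EXT-X-ENDLIST
```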

Updating my website

Rendering the audio player in the blog is a little tricky for two reasons. First, there's no markdown syntax I'm aware of that renders to an audio player. Second, I need to polyfill support for HLS, which, as I understand it, is what lets us stream the audio in chunks.

So I installed hls.js, and ChatGPT wrote this audio component for me:

```tsx
import React, { useEffect, useRef } from "react";
import Hls from "hls.js";

interface Props {
  src: string;
}

const HLSAudioPlayer = ({ src }: Props) => {
  const audioRef = useRef<HTMLAudioElement>(null);

  useEffect(() => {
    if (Hls.isSupported() && audioRef.current) {
      // Most browsers: hls.js fetches the playlist and feeds segments to the <audio> element
      const hls = new Hls();
      hls.loadSource(src);
      hls.attachMedia(audioRef.current);

      hls.on(Hls.Events.MANIFEST_PARSED, function () {
        audioRef.current?.play();
      });

      // Tear down the hls.js instance when the src changes or the component unmounts
      return () => hls.destroy();
    } else if (audioRef.current?.canPlayType("application/vnd.apple.mpegurl")) {
      // Safari supports HLS natively, so it can take the playlist URL directly
      audioRef.current.src = src;
      audioRef.current.addEventListener("loadedmetadata", function () {
        audioRef.current?.play();
      });
    }
  }, [src]);

  return <audio ref={audioRef} controls></audio>;
};

export default HLSAudioPlayer;
```

Extending the pattern introduced here, I updated the `a` link renderer to return the new HLSAudioPlayer whenever the href ends with .m3u8:

```tsx
const renderers: { [nodeType: string]: RendererFunction } = {
  a: ({ href = "", children }): React.ReactElement => {
    if (href.endsWith(".m3u8")) {
      return <HLSAudioPlayer src={href} />;
    }
    return renderLink({ href, children });
  },
  ...
};
```
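With that in place, a plain markdown link whose href points at the playlist on CloudFront renders as the audio player instead of a normal link (the domain below is made up):

```md
[Stream the song](https://d1234abcdef.cloudfront.net/song/output.m3u8)
```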

I share this in honor of the late Nicholas Hazel, who commissioned this song from me for the intro to his podcast series. There was only one episode, but I'm still forever honored. He was a fellow software engineer, and a kind and creative thinker. I think he'd be honored to know it was my "hello world" here.