In collaboration with Victor and Aneta, we crafted a solution to our challenges: the creation of a dynamic community comprising streamers and content creators. This community would serve as an invaluable testing ground for our product, with members providing crucial feedback. Moreover, their diverse stream clips would become the fuel for training our AI models, elevating our innovation.
I assumed the responsibility of steering the community, orchestrating the feedback flow, and curating stream clips for AI model training. This role, undertaken in close collaboration with the growth, content, and product teams, ensured a seamless and strategic approach to our mission.
Initial Feedback Report
One of the most important questions concerned the core features of the app. The chart below shows the percentage of streamers who wanted gameplay detection, facial expression detection, voice detection, a hotkey to clip from streams, and a PC app instead of a mobile one.
Of all the streamers I interviewed, 60% were competitive game streamers, which explains the demand for gameplay detection. The remaining 26% and 14% were casual and IRL streamers respectively, who wanted more facial and voice detection.
Out of all the clips the AI detected, only 20% were good clips. The rest were irrelevant.
Insights
Analyzing the data, it became apparent that the technology resonated more with IRL and casual game streamers than with their competitive counterparts. This realization prompted a strategic shift in our community outreach approach. The low clip rating also made it clear that we needed to curate clips for machine learning.
Shift in Outreach Strategy
Following the strategic shift in our outreach approach, I contacted handpicked IRL and casual streamers and invited them to join our community. In total, I managed to invite 55 streamers, and I started feedback interviews as soon as new members began joining.
Feedback Report Post-Outreach
After 30 interviews, I created a report. This time the findings were interesting: 65% wanted facial expression and voice detection. 19% wanted a feature where saying a trigger word like "clip it" on stream would create a clip. The remaining 16% wanted gameplay detection in addition to facial expression and voice detection.
The developers required high-quality clips to train the AI model, acknowledging that only 20% of the generated clips were deemed "clip-worthy." To address this, I sat down with Aneta and Kevin to plan a manual clip curation process. It involved scanning the backend data for clips from streamers in our community. We had approximately 250 members at this point, with data on each of them thanks to the Discord bots I had configured. I made a list of streamers who streamed on certain days and covered a mix of genres, ethnicities, and genders.
For a clip to be good, it must either be funny or trigger an emotional response in viewers. To improve facial expression detection, the AI needed faces of different ethnicities and genders; the more diverse, the better. It also needed videos at different resolutions, because not all streams are of the same quality. As for voice detection, a good mix of clips from male and female streamers was required.
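To make the selection criteria concrete, here is a minimal sketch in Python of how a diversity-aware shortlist of streamers could be built before reviewing their clips by hand. Every name, field, and the grouping rule are hypothetical illustrations of the criteria described above, not the actual tooling we used.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Streamer:
    name: str
    genre: str          # e.g. "IRL", "casual", "competitive"
    gender: str
    ethnicity: str
    stream_days: set    # days of the week the streamer is usually live
    resolution: str     # typical stream quality, e.g. "720p", "1080p"

def pick_curation_candidates(streamers, target_days, per_group_cap=3):
    """Pick a diverse shortlist of streamers whose clips will be reviewed manually.

    Groups candidates by (genre, gender, ethnicity, resolution) and caps each
    group so no single demographic or quality bucket dominates the training set.
    """
    buckets = defaultdict(list)
    for s in streamers:
        # Only consider streamers who are live on the days planned for curation.
        if s.stream_days & target_days:
            buckets[(s.genre, s.gender, s.ethnicity, s.resolution)].append(s)

    shortlist = []
    for group in buckets.values():
        shortlist.extend(group[:per_group_cap])
    return shortlist

# Example usage (hypothetical data):
# candidates = pick_curation_candidates(all_streamers, target_days={"Fri", "Sat"})
```

The cap per group is a simple way to keep the curated set balanced; in practice the same idea could be applied per clip rather than per streamer.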
After implementing the insights gained from the initial feedback and the curated clips, we sought a second wave of feedback. For this round, I contacted all the IRL and casual game streamers, both new and old, and interviewed 80 streamers in total.
80% appreciated the facial expression and voice spike detection. 12% wanted gameplay detection, and the remaining 8% wanted a manual recording option, either a hotkey or trigger words said during the stream.
The good clip rating went up to 50%, meaning streamers liked 5 out of every 10 clips generated.