
I appeared as a guest on the AVNation podcast AVweek, episode 666, recorded May 24, 2024. I will wait here if you want to listen to it; the link is https://www.avnation.tv/podcast/avweek-666-the-magic-of-networked-av/. We talked about three topics: one, how SMPTE 2110 was used at Eurovision; two, the addition of Artificial Intelligence (AI) to Microsoft Teams through a partnership with Logitech; and three, the current state of digital signage. Yes, we all made comments about how the episode topics are timeless.
While talking about the AI integration into online meetings, we touched on a few points about the use of AI. Indicating which items have been created by AI is an important one. I mentioned that AI is helping with the Advisist website; I will be indicating what is created by AI, which is currently graphics and excerpts. AI is not a solution if used improperly.
Being the inquisitive person I am, I decided to do some experiments. I asked three different AI image-generation tools to generate an image. They all received the same prompt: “a technical looking drawing with the colors black, white, and green. The drawing is used as a featured image for an article about artificial intelligence”. To test how summaries and transcriptions worked, I ran the podcast episode through a few providers.
I believe the most communicative and easily understood way to illustrate the differences is to show the images themselves. I entered the prompt “a technical looking drawing with the colors black, white, and green. The drawing is used as a featured image for an article about artificial intelligence” into Adobe Photoshop’s Firefly, Adobe Illustrator’s Firefly, and Midjourney. I used two different Adobe products to compare the output from two different design environments. The number of images output by default is three (3) for Adobe and four (4) for Midjourney, so I selected the first three images from Midjourney.
These are the nine images that were created. If you want to know which one was created by which software, hover your mouse over the image and wait a moment for the title to appear. In case you are curious, I used a random sequence generator from random.org to determine the display order.









There are definite stylistic differences and approaches, and they seem to be driven by the engine used. One thing I did not test: Adobe Firefly evaluates the other content in a document and tries to match its style when creating a new item. Not starting with a blank canvas, or adding more descriptive text, will definitely change the results. You can try these out at https://www.adobe.com/products/firefly.html and https://www.midjourney.com/explore?tab=hot.
The visual illustration hopefully helps to show the differences between engines and environments. We can extrapolate from those results that the same thing would happen when interpreting what goes on in a meeting. As I said, I ran the recording of AVweek Episode 666 through transcription engines. The comparison was not quite equal: I did not use Copilot, as the free version did not provide a way to create summaries. I used Adobe Acrobat’s engine and Otter.ai’s engine. Click on the headers if you want to download the raw transcripts to compare. Otter also generated action items and an outline; both were good starting points. After listening to the podcast, would you use the summaries below?
Otter AI
The speakers discussed the latest developments in AV technology, including the implementation of 2110 workflow in live broadcasts and the potential of AV over IP solutions. They also explored the use of AI in meetings and digital signage, emphasizing the importance of contextual cues and interactivity to enhance customer experience. The speakers highlighted the benefits of tailoring digital signage to the viewer’s preferences and using it to keep customers from comparing products on their phones.
Adobe AI
The document titled “The Magic of Networked AV” discusses the importance of digital signage in the AV industry, the need for interactive and personalized experiences, the potential of AV over IP technology, and the convergence of technologies. It also mentions the successful implementation of AV over IP at Eurovision and its future applications. The document highlights the benefits of AI in various aspects of AV, such as vocabulary learning, meeting scheduling, content creation, and visual cues in video conferencing. It also emphasizes the role of digital signage in providing personalized shopping experiences and targeted advertising.
To leave you with food for thought or discussion: do you think it is a good thing if AI tries to predict the way someone will answer a question?
Otter has the ability, and I am sure others do as well, to predict how each person would answer a question. I found the feature in the AI chat window by accident, but decided to experiment. I asked the AI the question, “What guidelines would _____________ recommend for reviewing AI-generated content to ensure it provides value without replacing human expertise?”
- Bradford would likely recommend that companies establish clear guidelines for reviewing AI-generated content to ensure it provides value without replacing human expertise. He may suggest designating certain roles or individuals to review all AI output before publication or implementation. Bradford would also stress the importance of fact-checking AI conclusions and not solely relying on its recommendations without human oversight. He would likely advise training staff to critically evaluate AI results and understand where its knowledge may be limited. Above all, Bradford would emphasize that AI should aim to enhance human work rather than replace it.
- Based on her comments, Dawn would likely recommend taking a cautious approach to reviewing AI-generated content. She may stress the need to carefully fact-check any information from AI systems against reliable sources, since garbage in can lead to garbage out. Dawn would also emphasize having subject matter experts review content to catch any inaccuracies or issues related to context. Given her background, she would probably advise putting strict controls on AI systems to prevent sensitive data from being exposed. Overall, Dawn seems wary of relying too heavily on AI and would want human expertise and oversight to remain central to the review and validation of any AI-generated outputs.
- Based on his comments, Jeremy would likely focus on the user experience when recommending guidelines for reviewing AI-generated content. He would stress the importance of testing content with real users to evaluate how well it engages and informs the intended audience. Jeremy would also suggest gaining user feedback to identify any gaps or inaccuracies in the AI’s understanding. He may propose a collaborative review process involving subject matter experts, clients/customers and AI specialists to get different perspectives. And Jeremy would likely advise regularly improving the AI model based on review findings to enhance its performance over time. Above all, he would want the review process to demonstrate how AI can add value to human expertise, rather than replace it.
If you will excuse me, I need to go discuss with the AI refrigerator what we are having for dinner…
Thank you for reading.
P.S. I used WordPress’s AI to create the excerpt.
