This is part 1 of the talk I gave at RAEEUCCI-2023 (2nd International Conference On Recent Advances In Electrical, Electronics, Ubiquitous Communication & Computational Intelligence). In this talk, I describe how to build an AI server.
The subtitles were generated by Whisper, which I must say is very impressive. I am not a native English speaker, so I made a lot of silly grammar mistakes in this talk. However, Whisper managed to transcribe my speech into subtitles and even corrected those mistakes. I'm very pleased with its performance.
This speech includes:
01:34 Budget vs. AI server
02:51 Flappy Bird web-based AI training program
04:24 What do I need to set up an AI server?
06:06 With GPU or without GPU
07:13 AI server examples
08:57 Buying hardware is the simplest part
09:13 GPU performance comparison
10:21 AI Framework introduction
14:00 Using Google Trends to choose an AI framework
14:59 Coding environment
15:13 AI server construction concept
15:53 Embedded AI platform discussion
17:32 Jetson Nano
18:23 Jetson TX2
18:36 Programming language selection
19:03 Check Flappy Bird training result so far
20:06 Real AI server demonstration
21:38 Storage server demonstration
...
https://www.youtube.com/watch?v=NNkgxJjKk9c
This is part 2 of the talk I gave at the RAEEUCCI-2023 (2nd International Conference On Recent Advances In Electrical, Electronics, Ubiquitous Communication & Computational Intelligence) conference at SRM University, India. In this part, I demonstrate how to use the AI server I built to generate English subtitles for the part 1 talk. By the way, all the subtitles on this channel were generated using Whisper.
The content of shell script whisperE.sh is:
whisper --language English --model large-v2 --output_format srt --device cuda $1
The subtitles of this video were generated by this script; I then edited them by hand.
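The one-liner above can be wrapped into a slightly more general script. The sketch below is my own elaboration, not the original whisperE.sh: the `transcribe` function name and the `DRY_RUN` switch are assumptions I added so the command can be previewed without a GPU or Whisper installed.

```shell
#!/bin/sh
# Sketch of a wrapper around the whisper command from the description.
# Assumes filenames without spaces; DRY_RUN=1 prints the command instead
# of running it (my addition, not part of the original whisperE.sh).
transcribe() {
    # Same flags as in the description: English, large-v2 model,
    # SRT output, decoding on the CUDA GPU.
    cmd="whisper --language English --model large-v2 --output_format srt --device cuda $1"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"
    else
        $cmd
    fi
}

# Preview the command that would run for one input video.
DRY_RUN=1 transcribe part1.mp4
```

Running it with `DRY_RUN=1` simply prints the whisper invocation; without it, the script calls whisper directly, which requires the Whisper CLI and a CUDA-capable GPU.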
...
https://www.youtube.com/watch?v=86WNmhapg4w