Peter recommended that we update the structure so that happy and sad auditory stimuli alternate. This may help reduce the influence of confounding factors (e.g., time of day, weather, brightness). We therefore revised our script and submitted it.
We expect a response by Monday next week. In the meantime, Sebastian set up his infrastructure on Google Colab and started extracting frames and audio from the sample videos.
Today, Josephine will present her code, and Sebastian will try to replicate it in Python afterwards. So excited!