UC1 and Top 3 Points from the First Year

In Necoverse, the UC1 target is to utilize new metaverse technologies and to measure their effectiveness in vocational education within the robot training use case.

The first experiment of the UC1 robot training pilot consisted of two rounds: the first was a group work round and the second an individual learning round. Both sessions used the same combination of browser-based and VR headset-based environments before moving to the actual physical robot classroom.

Looking back at the first year and at what we learned when taking theory into practice, we arrived at three interesting findings: some expected, others recognized only when observing real users in action during the first training pilot experiment.

1. What are the strengths of a metaverse environment in vocational training?

While building the digital training environment and defining how to operate in it for vocational education, we have kept returning to an obvious question: "Is there a need to repeat traditional methods in the digital world?" The intention to take training beyond the expected and to develop a concept of digital pedagogy has been a driving force as well.

Virtual environments allow us to create immersive experiences and real-time collaboration, and they can also move training into a safe setting where trainees can practice even dangerous work situations with confidence. Another benefit of a virtual environment compared to traditional online simulators is the ability to experience the physical movement of the robot. In the robot use case, operations such as point calibration can be viewed freely: you can move or teleport yourself to any viewing angle to observe the virtual twin and see what position the robot is taking, which is the closest thing to the feel of a real robot. All of this improves proficiency and hands-on experience when starting a new task.

Other clearly recognized benefits are the possibility to repeat any part of the training as many times as needed, and the scaling of availability through virtual instances. Use of a metaverse training environment is not limited by the physical availability of space or by access to specific locations. Cross-platform support makes scaling easy without the need for special devices, improving equality and lifelong learning.

Metaverse approaches also clearly extend opportunities to offer training to participants abroad, and even to open up a training and education export business. There is still a need to understand how different access devices deliver the most potential in different phases of a training situation, and we will continue evaluating the roles of headset, desktop and mobile devices while running the next training pilots.

2. Is metaverse technology better suited to a group or an individual learning approach?

Here, the purpose of our experiment was to test whether the browser-based environment combined with the VR headset-based environment for operating the robot DigitalTwin is more efficient (or works better) in a group learning setting or in an individual learning setting.

What we found was that the group setting was inefficient, because the time the groups had in VR was not enough for them to complete the tasks. Compared with the group learners, the individual learners were far more likely to finish all of the tasks at the VR station, or at least many more of them. The reason for the inefficiency was the "chaos" that happens in the group situation: many people were trying to operate the robot simultaneously. We must remember that there are multiple panels operating a single robot, so people were overlapping their control of the robot and overriding what others had already done.
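Why does this overriding happen? In a setup like this, every panel ultimately writes to the same robot state, so the last command wins. The following is a minimal sketch of the effect in Python; it is our own illustration, not the pilot software, and the class and panel names are hypothetical:

import threading

# Minimal illustration of why several control panels driving one robot
# override each other: every panel writes to the same shared target pose,
# so the most recent write always wins, regardless of who sent it.
class SharedRobot:
    def __init__(self):
        self._lock = threading.Lock()
        self.target_joints = [0.0] * 6  # one pose shared by the whole group

    def set_target(self, panel_id: str, joints: list[float]) -> None:
        with self._lock:
            self.target_joints = joints
            print(f"panel {panel_id} now controls the pose: {joints}")

robot = SharedRobot()
robot.set_target("A", [0.0, 45.0, 90.0, 0.0, 30.0, 0.0])  # learner A's calibration
robot.set_target("B", [10.0, 0.0, 0.0, 0.0, 0.0, 0.0])    # learner B wipes it out

Learner B's command silently replaces learner A's work, which is exactly the overlap the groups experienced.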

So what tended to happen was that people in the same group were either waiting for somebody else to finish, or one person would simply take charge and complete the tasks while the others were watching.

One advantage that did emerge in the group situation was peer-to-peer learning. This happened when one person managed to complete a certain task and then explained it to the others in the group. So while there was peer-to-peer learning in these scenarios, there was not enough time for everyone to finish all the tasks, because each person wanted to complete the tasks themselves.

These are the main positive and negative aspects of group learning. However, we recognize a need to "go back" to qualitative studies with these new technologies in new contexts (virtual robot education): perhaps smaller volumes of users, but a deeper understanding of their problems and of our systems' problems.

The quantitative Short-UEQ and SUS results of the UC1 user tests also point somewhat in that direction, but at the same time we need to be aware of the phenomenon referred to as "the SUS trap", in which the developer focuses on improving the score instead of the user's experience. A more profound, discussion-based protocol for testing our next innovations could provide useful results both in practice and in scientific theory.
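For readers unfamiliar with SUS: the questionnaire reduces ten 1-5 Likert items to a single 0-100 score, which is precisely what makes it tempting to optimize the number rather than the experience. A minimal sketch of the standard SUS scoring formula follows; the responses in the example are made-up data, not our pilot results:

def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded: response - 1.
    Even-numbered items are negatively worded: 5 - response.
    The summed contributions (0-40) are scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Made-up example responses from one participant (not pilot data):
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0

A single number like this is convenient for tracking, but as the "SUS trap" reminds us, the qualitative "why" behind the score is what actually guides redesign.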

3. How much guidance is needed in the virtual training situation?

Video control in the group scenario was also a bit difficult, because some people wanted to watch certain parts of the video while others wanted to watch other parts, causing overlap and conflict.

The other thing that was quite clear is that when they got to the real robot, participants struggled to move it to the home position. This was true for both individual and group learners. The reason is that the "how-to" for moving the robot to the home position was never covered in the theory videos, so the participants were never forced to commit it to memory.

The way they learned to move the robot to the home position was through a panel that appeared in the VR environment and gave them the steps one by one: for step 1, do X; for step 2, do Y; for step 3, do Z. Participants followed the steps and easily succeeded in VR, because the environment handed them step-by-step instructions to complete the task. But when they got to the real robot, they had been "conditioned" to rely on the instruction panel guiding them, so the procedure was never committed to memory.

This highlights for us what has been a widespread, fundamental flaw in VR training: letting users follow a set of steps, through checklists or similar, to train something in VR. This applies particularly in situations where the VR training scenario can only be repeated a limited number of times.

We must avoid this, because things are not committed to memory with a checklist-driven approach. Instead, we need to find a way to stimulate the person to solve the issue themselves: let them recognize what they have not yet understood, and offer access to the information needed to resolve the situation, along the lines of the sketch below.
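As a concrete illustration of the difference, here is a minimal, hypothetical sketch (the classes and step names are our own, not the actual pilot software) contrasting a checklist panel that always pushes the next step at the trainee with a guidance model that reveals a hint only when the trainee asks for one:

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    hint: str           # shown automatically or only on request, depending on mode
    done: bool = False

@dataclass
class GuidancePanel:
    steps: list[Step]
    hints_requested: int = 0  # simple measure of how much scaffolding was used

    def next_step(self) -> Step | None:
        return next((s for s in self.steps if not s.done), None)

    def checklist_prompt(self) -> str:
        """Checklist-driven mode: always tell the trainee what to do next."""
        step = self.next_step()
        return f"Next: {step.name} - {step.hint}" if step else "All steps done."

    def on_demand_hint(self) -> str:
        """Problem-solving mode: reveal the hint only when the trainee asks."""
        step = self.next_step()
        if step is None:
            return "All steps done."
        self.hints_requested += 1
        return f"Hint for '{step.name}': {step.hint}"

# Hypothetical home-position procedure (step names are illustrative only):
panel = GuidancePanel([
    Step("select manual mode", "switch the controller to manual jog mode"),
    Step("open jog screen", "navigate to the jog/position screen"),
    Step("drive to home", "hold the home command until all joints reach zero"),
])
print(panel.checklist_prompt())   # checklist mode pushes the answer at the trainee
print(panel.on_demand_hint())     # on-demand mode makes the trainee ask first

Counting hint requests, rather than auto-advancing through steps, forces the trainee to first try to retrieve the procedure from memory, which is exactly the effect we want to carry over to the real robot.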

The UC1 pilot will continue with new trial rounds in which we expand the use of virtual environments step by step. One of the upcoming integrations into the UC1 use case will add multilingual communication and accessibility within the virtual learning environments. Using automatic speech recognition (ASR) and neural machine translation (NMT) technologies, participants will be able to work in their own languages while using shared training platforms. In the context of UC1's robot training, this approach aims to improve inclusivity and accessibility, particularly for international collaboration.
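As a rough sketch of how such a pipeline could be wired together (this is our own illustration using openly available models, not the project's actual integration; the audio file name is hypothetical), speech is first transcribed with an ASR model and the transcript is then passed to an NMT model:

# Illustrative ASR -> NMT pipeline using openly available models;
# the actual UC1 integration may use different components.
import whisper                      # pip install openai-whisper
from transformers import pipeline   # pip install transformers

# 1. ASR: transcribe the participant's speech to text.
asr_model = whisper.load_model("base")
result = asr_model.transcribe("trainee_question.wav")  # hypothetical audio file
transcript = result["text"]

# 2. NMT: translate the transcript into the shared session language.
#    Helsinki-NLP publishes open opus-mt models for many language pairs.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fi-en")
translation = translator(transcript)[0]["translation_text"]

print("Original:  ", transcript)
print("Translated:", translation)

In a live session the same two stages would run on streaming audio, with the translated text rendered as captions inside the virtual environment.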

We are thrilled about the opportunity to measure and compare traditional and digital approaches with real-life use case experiments.

We will keep you posted!