Necoverse Learnings 2024

Necoverse people have been busy in 2024 and achieved interesting results. Here are the most important learnings from each use case.

UC1 – Training

Use Case 1’s goal is to utilize new metaverse technologies and measure their effectiveness in vocational education, particularly within the Robot Training use case.

The first experiment of the UC1 Robot Training pilot consisted of two different rounds: the first as group work and the second as individual learning. Both sessions utilized combined approaches of browser-based and VR headset-based environments before transitioning to an actual physical robot classroom.

Reflecting on the first year, we identified three interesting findings from translating theory into practice during the first training pilot experiment:

  1. Strengths of the Metaverse Environment in Vocational Training: We questioned the need to repeat traditional methods in a digital world while developing a digital training environment for vocational education. Our intention to push training beyond expectations with a concept of digital pedagogy has been a driving force. Virtual environments enable immersive experiences and real-time collaboration, offering a safe setting where trainees can confidently practice even dangerous tasks. One advantage over traditional online simulators is the ability to physically move with the robot; for example, in the Robot use case, operations such as point calibration can be freely viewed from any angle, closely mimicking the real robot experience. This greatly enhances proficiency and hands-on experience for newcomers. Other clear benefits include the ability to repeat training sections as needed and scalability due to virtual instances. The use of the metaverse for training transcends physical space and accessibility limitations, with cross-platform support facilitating easy scaling without the need for special devices, thus promoting equality and lifelong learning. These approaches also create opportunities for international participants and potentially open avenues for training and education exports.
  2. Group vs. Individual Learning in Metaverse Technology: Our experiment aimed to determine whether a combined browser and VR headset-based approach to operating a robot digital twin is more effective in a group or individual learning setting. We found it inefficient in a group setting due to insufficient VR time, preventing completion of tasks. In contrast, individual learners were more likely to complete all tasks. The inefficiency in groups stemmed from the chaos of multiple people trying to control the robot simultaneously, often overlapping and overriding each other’s inputs. While some peer-to-peer learning occurred when individuals explained completed tasks to others, the group format did not allow enough time for everyone to perform the tasks themselves.
  3. Guidance Needed in Virtual Training Situations: Control of video playback was problematic in group scenarios, as participants wanted to view different parts simultaneously, leading to conflicts. Additionally, when working with the real robot, participants struggled to position it correctly because the instructional videos did not cover this. They learned the procedure in the VR environment through a panel providing step-by-step instructions, which they could not recall without prompts when transitioning to the actual robot. This underscores a fundamental flaw in VR training: relying on checklists limits retention of procedures. Future training must encourage problem-solving by allowing participants to identify gaps in their understanding and access necessary information to resolve issues.

The UC1 pilot will continue with new trial rounds where we expand the use of virtual environments step by step. One of the upcoming integrations into the UC1 use case will add multilingual communication and accessibility within virtual learning environments. Using automatic speech recognition (ASR) and neural machine translation (NMT) technologies, participants will have the ability to work in their own languages while using shared training platforms. In the context of UC1’s robot training, this approach aims to improve inclusivity and accessibility, particularly for international collaboration.
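The ASR-to-NMT chain described above can be sketched as a small relay: speech is transcribed once in the speaker’s language, then translated once per listener language. The `TranslationChannel` class and the stub models below are illustrative assumptions, not project code; any real ASR or NMT backend could be injected in their place.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TranslationChannel:
    """Relays one spoken utterance to every participant's language.

    transcribe: ASR model, (audio, source_lang) -> text
    translate:  NMT model, (text, source_lang, target_lang) -> text
    """
    transcribe: Callable[[bytes, str], str]
    translate: Callable[[str, str, str], str]

    def relay(self, audio: bytes, source_lang: str,
              target_langs: List[str]) -> Dict[str, str]:
        # Recognize speech once, then fan out one translation per listener.
        text = self.transcribe(audio, source_lang)
        return {lang: self.translate(text, source_lang, lang)
                for lang in target_langs}


# Placeholder models so the sketch runs without real ASR/NMT weights.
def stub_asr(audio: bytes, lang: str) -> str:
    return audio.decode("utf-8")  # pretend recognition is perfect


def stub_nmt(text: str, src: str, tgt: str) -> str:
    return text if src == tgt else f"[{tgt}] {text}"


channel = TranslationChannel(stub_asr, stub_nmt)
out = channel.relay(b"move the robot to home position", "fi", ["fi", "en", "sv"])
```

Because recognition happens once per utterance and translation once per target language, adding a participant with a new language only adds one NMT call, which is what makes the shared-platform approach scale.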

UC2 – Architecture and Design

Necoverse Use Case 2 focuses on utilizing metaverse environments as visualization tools, particularly aiding in visual design, construction planning, and communication. The aim of research and development has been to address two visually distinct needs: photorealistic modeling for architectural purposes and more technical visualization for change management during the construction phase. A common goal is to facilitate communication among various stakeholders in a multi-user VR environment.

Challenges and Solutions in the Change Management Environment

From the outset, it was clear that Change Management requires integrating the original plan with an up-to-date status view of the construction site. This integration was achieved by enabling design models and laser-scanned point clouds as visual layers in the VR environment.

The most significant challenges have been related to handling point clouds. Although the primary development focus is to enable communication in the VR environment, these challenges have also influenced the development of tools for this communication platform.

Challenge 1: Rendering Point Clouds in a VR Environment

The first challenge was optimizing point cloud rendering in the Unity development platform. By default, Unity’s rendering system works with triangles (faces), which consist of three points and the area enclosed by their edges. Rendering triangles is computationally expensive compared to rendering individual points in a point cloud. This issue was resolved by changing Unity’s data structures and converting the default triangular topology to a point-based format. This optimization enabled the creation of VR environments containing point clouds with tens of millions of points.
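Conceptually, the topology change replaces an index buffer that groups vertices into faces with one that draws each vertex as a bare point. The Unity-side details are engine-specific; the NumPy sketch below only illustrates the idea and is not the project’s implementation.

```python
import numpy as np


def triangles_to_point_topology(triangle_indices):
    """A triangle index buffer lists vertices in groups of three, one
    filled face per group. Point topology needs only one index per
    vertex, so the GPU draws unconnected points; deduplicating the
    indices keeps each shared vertex exactly once."""
    return np.unique(np.asarray(triangle_indices).ravel())


# Two triangles sharing an edge reference only four distinct vertices.
faces = np.array([[0, 1, 2], [2, 1, 3]])
point_indices = triangles_to_point_topology(faces)
```

Skipping face assembly and rasterization of the enclosed areas is what makes clouds of tens of millions of points feasible: each point costs a single index, not a share of a triangle.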

In addition to this topological change, another challenge was controlling the number of rendered points. Although the calculations are simple, processing tens of millions of points simultaneously is computationally heavy due to the sheer volume of operations. To reduce the number of points and avoid creating unnatural-looking patterns in the point cloud, an algorithm was developed to efficiently select points, using computation time proportional to the number of retained points rather than iterating through every point in the point cloud. Randomness was precomputed for each point and applied during the selection process.
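One way to get per-selection cost proportional to the retained points is to precompute a random order once and then take a prefix of it for any density level; a prefix of a uniform shuffle is itself a uniform random subset, so thinning the cloud leaves no visible patterns. This is a sketch of that idea under those assumptions, not the project’s exact algorithm.

```python
import numpy as np


class PointCloudSampler:
    """Precomputes randomness once (a shuffled index order), so that
    selecting any number of points afterwards is just gathering a
    prefix of that order: O(kept points), not O(cloud size)."""

    def __init__(self, points, seed=0):
        self.points = points
        rng = np.random.default_rng(seed)
        # One-time O(M) precomputation of the random order.
        self.order = rng.permutation(len(points))

    def sample(self, keep):
        # O(keep): gather the first `keep` indices of the fixed shuffle.
        return self.points[self.order[:keep]]
```

A side benefit of the fixed order is stability: lowering the density only removes points, it never swaps one random subset for a completely different one, so the cloud does not shimmer as the level of detail changes.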

Challenge 2: Using Point Clouds for Data Visualization

Laser-scanned point clouds can contain additional data on top of position information, such as color values. From the beginning, it was essential to visualize other point-specific data. A key goal was to present the distance of each scanned point relative to the design model’s surface. This distance information can be visualized either by coloring the points accordingly or by filtering visible points based on their distance.
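Both visualization modes reduce to the same per-point quantity: a signed deviation between the scan and the design surface, which is then either mapped to a color or used as a visibility filter. The sketch below assumes a planar design surface for simplicity; a real pipeline would measure distance to the nearest design-model triangle.

```python
import numpy as np


def distance_to_plane(points, plane_point, plane_normal):
    """Signed distance of each scanned point to a planar design
    surface (positive above the surface, negative below)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return (points - plane_point) @ n


def colour_by_deviation(distances, tolerance):
    """Map deviations to RGB: green within tolerance, red above the
    design surface, blue below it."""
    colours = np.zeros((len(distances), 3))
    colours[np.abs(distances) <= tolerance] = [0.0, 1.0, 0.0]
    colours[distances > tolerance] = [1.0, 0.0, 0.0]
    colours[distances < -tolerance] = [0.0, 0.0, 1.0]
    return colours


def filter_by_deviation(points, distances, max_abs):
    """Alternative view: hide points whose deviation exceeds max_abs."""
    return points[np.abs(distances) <= max_abs]
```

Since the deviation is computed once per point, switching between the colored view and the filtered view is cheap: both reuse the same distance array.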

To achieve this, a method for reading and writing point-specific data was required. Managing such data within files proved challenging due to the large volume of data. The solution was to develop asynchronous data reading and writing, enabling parallel data processing. This approach maintains a smooth user experience in the VR environment, even if heavy computations run in the background.
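The pattern can be sketched with a worker thread that streams attribute chunks through a thread-safe queue while the render loop polls without ever blocking. The name `AsyncPointLoader` and its interface are illustrative assumptions, not the project’s API.

```python
import queue
import threading


class AsyncPointLoader:
    """Reads point-attribute chunks on a worker thread and hands them
    to the render loop through a queue, so file I/O and heavy
    per-chunk processing never stall the frame."""

    _DONE = object()  # sentinel marking the end of the stream

    def __init__(self, read_chunk, num_chunks):
        self._queue = queue.Queue()
        self._read_chunk = read_chunk  # blocking I/O callable
        self._num_chunks = num_chunks
        self.finished = False
        self._worker = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._worker.start()

    def _run(self):
        # Runs off the main thread: blocking reads are harmless here.
        for i in range(self._num_chunks):
            self._queue.put(self._read_chunk(i))
        self._queue.put(self._DONE)

    def poll(self):
        """Called once per frame: returns the next ready chunk, or
        None when nothing is ready yet (the frame never blocks)."""
        try:
            item = self._queue.get_nowait()
        except queue.Empty:
            return None
        if item is self._DONE:
            self.finished = True
            return None
        return item
```

The render loop calls `poll()` once per frame and draws whatever has arrived; because `get_nowait()` returns immediately, the user keeps a smooth VR experience even while large files are still being read in the background.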

Exploring User Experience in Virtual Spaces: Top Findings and Lessons

Virtual environments have become a cornerstone of immersive technology, offering unique opportunities for interaction and creativity. Hollinki, together with Turku University of Applied Sciences, conducted a study in the Magical Spaces metaverse environment to explore what makes these spaces enjoyable and effective. Here’s a closer look at the top findings and lessons from the study.

Top 3 Findings

  1. Social Interaction Amplifies Enjoyment: Participants in group settings reported richer experiences compared to those exploring the virtual space alone. Shared activities fostered connection and engagement, making the environment more enjoyable.
  2. Functionality Drives Immersion: Interactive features, such as customizing the environment or ordering drinks from a virtual robot, were pivotal in creating a sense of engagement. These functional elements significantly contributed to positive emotions and active exploration.
  3. Realism Has Mixed Impacts: While some participants appreciated realistic elements for enhancing immersion, others found the discrepancies between real-world and virtual interactions frustrating. This suggests that perfect realism isn’t always necessary and can sometimes detract from the experience.

Top 3 Lessons Learned

  1. Design for Connection: Incorporating features that promote social interaction, like collaborative tasks or shared activities, can elevate user satisfaction and engagement in virtual spaces.
  2. Prioritize Intuitive Interactions: Functional elements should be easy to use and align with users’ expectations to reduce frustration. Designers should balance realism with usability to keep users immersed.
  3. Optimize Space and Layout: Adequate virtual space is crucial, especially for group interactions. Crowded or poorly designed layouts can hinder movement and diminish the experience, underscoring the need for thoughtful spatial design.

Conclusion

This study highlights the importance of creating virtual spaces that foster social connection, provide engaging functionality, and consider user comfort. As virtual environments continue to evolve, these insights will help designers craft experiences that are not only immersive but also deeply enjoyable.

UC3 – Remote Inspections

The Necoverse project has provided invaluable insights into the evolving world of remote operations. By exploring applications in cranes, elevators, fire safety systems, and more, we are beginning to understand how cutting-edge technologies can reshape maintenance, logistics, and inspections. Below, we summarize the key learnings our team has gathered so far.

Image: Tuomas Suominen (Business Development, Kiwa) and his colleague, Area Manager Samuli Salmela, testing a RealWear headset
  1. Business Potential: A Game-Changer with Some Risks. Remote inspections have the potential to reduce costs and increase efficiency, with predictive and smart maintenance leading the way. However, businesses are often hesitant to invest in advanced equipment due to concerns about return on investment and reliability. Encouragingly, the decreasing cost of tools like smart glasses makes widespread adoption inevitable. These tools are on the verge of becoming standard for remote collaboration and fieldwork.
  2. Technology Readiness: Progress Meets Practicality. The rapid advancement of wearables, robots, AI for defect detection, and remote operation software is evident. Yet many tasks can still be effectively accomplished with more affordable and accessible devices, such as smartphones. For these solutions to scale, we need to prioritize improving IoT connectivity, enabling safe remote control, and ensuring robust video and audio systems. Moreover, these technologies must be made user-friendly and accessible to encourage widespread adoption.
  3. New Tech Enables Us to Inspect Assets in Hazardous Places. Extended reality and the industrial metaverse offer new tools specially designed to tackle the challenges of demanding inspections in environments such as nuclear power plants, where human operation is limited. Industrial metaverse technologies with advanced user interfaces that combine real-time high-resolution video streams (e.g., Nokia RXRM technology) and digital twin visualizations will open disruptive new ways to conduct hazardous inspections. Collaboration in particular is a key element of industrial metaverse solutions, and the same level of immersion cannot be reached with traditional collaboration tools such as teleconferencing.
  4. Standardization: The Missing Piece. Strict regulations and a lack of unified standards continue to present challenges, particularly in the inspection industry. For remote inspection technology to realize its full potential, the industry should collaborate to harmonize standards. Other regulated sectors, such as healthcare—where robotics and remote operations are already advancing—offer an inspiring blueprint for how this can be achieved.
  5. Safety: Is On-Site Presence Still Necessary? While remote tools are improving, fully replacing human presence on-site remains a challenge. Safety supervision on-site is currently critical to maintaining compliance and reliability, especially in high-risk environments. The question arises: do inspectors always need to be physically present, or can a trained safety professional suffice? This remains an area for further exploration and debate as technologies evolve.
  6. A Future Shaped by Collaboration and Advocacy. From drones inspecting wind turbines to augmented reality (AR) tools aiding fire safety evaluations, our Necoverse pilots highlight the transformative possibilities of these technologies. However, technological progress alone isn’t enough. Collaboration across industries, regulatory alignment, and building trust in these solutions are all crucial. Additionally, internal advocacy is often needed to convince stakeholders to embrace new technologies and ways of working.
  7. Human Impact: The Role of Skills and Adaptability. While technology plays a central role in advancing remote inspections, the human factor remains equally critical. Adopting remote inspection technologies requires upskilling the workforce to operate and interpret data from these new tools effectively. Field technicians and inspectors must adapt to using wearables, augmented reality (AR), and AI-driven platforms, which can require significant training and cultural change. Additionally, fostering trust among employees and stakeholders is essential, as many might feel uncertain about the implications of these technologies for job security and traditional roles. Addressing these human concerns is vital for a smooth transition and the widespread acceptance of remote inspection technologies.

Summary: Where Do We Go From Here? The Necoverse project highlights that while the technology for remote inspections may already be more advanced than expected, true progress will depend on harmonized standards, cost reductions, and fostering collaboration across industries. Remote inspections and field collaboration are not just concepts for the future—they are becoming the present, especially in hazardous places. The question is no longer “if” but “how fast” these technologies and procedures can be implemented.

We are excited to demonstrate some of our pilots in the coming months, with updates anticipated by May 2025. Stay tuned for updates as we continue to explore and innovate in this rapidly evolving field!