Mini-Review
Bridging the Sim2Real Gap: Advancing Robotics for Unpredictable Environments
Loso Judijanto, IPOSS Jakarta, Indonesia
Received Date:April 18, 2025; Published Date:May 05, 2025
Abstract
The persistent Sim2Real gap-the divergence between robotic performance in simulation and the real world-remains a critical barrier to deploying autonomous systems in unpredictable environments. This review examines the root causes of the gap, including limitations in simulating complex physical interactions, sensor noise, and environmental variability, which result in significant policy transfer challenges. Key obstacles include dynamic discrepancies in physics and sensing, limited simulation fidelity for deformable materials and high-frequency phenomena, and computational scalability barriers that restrict access to high-fidelity training. Recent advances such as domain randomization, meta-reinforcement learning, and hybrid simulation-real data pipelines have improved transfer robustness, with techniques like continual domain randomization and differentiable simulation architectures enabling adaptation to real-world variability.
Applications in disaster response, agriculture, and urban navigation demonstrate the transformative potential of these methods, as evidenced by UAVs achieving high accuracy in post-disaster mapping and agricultural robots optimizing crop yields. However, challenges persist in modeling stochastic phenomena, scaling simulations, and ensuring policies adapt to novel conditions without catastrophic forgetting. Future directions emphasize the need for high-fidelity differentiable simulators, adaptive learning frameworks for dynamic deployment, and cyber-physical co-design strategies that tightly integrate virtual and physical prototyping. Addressing ethical concerns around data bias, resource equity, and societal impact will be essential to ensure that Sim2Real breakthroughs deliver broad and responsible benefits across sectors.
Keywords:Sim2Real gap; Robotics; Domain Randomization; Meta-Reinforcement Learning; Differentiable Simulation; Adaptive Learning
Introduction
The Sim2Real gap-the persistent disparity between robotic performance in simulated environments and the real worldremains a central obstacle in deploying autonomous systems across dynamic, unpredictable settings [1, 2]. This gap is rooted in the inherent limitations of simulations: they often fail to capture the full complexity of real-world physics, sensor noise, and environmental variability, resulting in significant deviations when transferring learned policies to physical robots [3-7]. For example, factors such as unmodeled friction, material deformation, and sensor imperfections introduce discrepancies that simulations, even those with advanced physics engines, struggle to replicate with high fidelity [2,8]. Bridging this gap is essential for enabling robots to operate reliably in challenging real-world domains, including disaster response, agriculture, and urban environments, where conditions are highly variable and unpredictable [8,10]. In these contexts, even minor mismatches between simulated and real-world conditions can lead to failures, limiting the practical deployment of autonomous systems [5,8]. While simulation-based training offers substantial advantages-such as safety, scalability, and cost-effectiveness-it is fundamentally constrained by its inability to fully mirror real-world complexity. This limitation restricts the direct transferability of control policies, often necessitating extensive fine-tuning or retraining on physical hardware, which can be resource-intensive and time-consuming [2,4,7], [11-13]. Recent research underscores the importance of addressing not only the obvious differences in dynamics and sensing, but also subtler sources of error, such as delays, asynchronous dynamics, and unmodeled interactions, all of which can undermine policy robustness during deployment [7,14,15].
To tackle these challenges, the robotics community has developed a range of innovative approaches. Techniques such as domain randomization, meta-reinforcement learning, and hybrid simulation-real data pipelines have demonstrated measurable progress in narrowing the Sim2Real gap, improving the robustness and adaptability of robotic systems in the field [2,7,8,16]. For instance, domain randomization-where simulation parameters like lighting, textures, and physical properties are systematically variedhas proven effective in preparing robots for the unpredictability of real-world environments, as evidenced by recent successes in humanoid and quadruped robotics [2,8,17,18]. This review critically examines the latest advancements in bridging the Sim2Real gap, highlighting both the technical challenges and the promising solutions emerging from recent studies. By synthesizing insights from cutting-edge research and real-world deployments, we aim to chart a path forward for the development of resilient, adaptable robotic systems capable of thriving in the complex and ever-changing conditions of the real world1[2,5,7,8].
Key Challenges in Sim2Real Transition
The transition from simulation to real-world robotics deployment faces three fundamental challenges that stem from inherent discrepancies between digital models and physical environments. These challenges manifest across dynamic interactions, simulation fidelity, and computational scalability, each introducing unique barriers to reliable policy transfer.
Dynamic Discrepancies in Physics and Sensing
At the core of the Sim2Real gap lies the inability of simulations to replicate the nuanced physical interactions of realworld environments [4,10,19]. While modern physics engines approximate rigid-body dynamics, they consistently fail to capture critical phenomena such as static friction hysteresis, nonlinear material deformations, and stochastic environmental disturbances [20,21,22]. For instance, studies reveal that static friction-torque ratios in robotic joints can exceed simulated values by 70%, causing catastrophic policy failures during real-world deployment of reinforcement learning models [20,23]. These discrepancies become particularly pronounced when handling deformable objects-a 2024 analysis demonstrated that rigid-body simulators oversimplify fabric manipulation tasks by neglecting shear-dependent damping properties, leading to 83% mismatch in predicted vs. actual grip forces [21,24]. Sensor modeling presents another critical frontier, where even state-of-the-art camera noise models diverge from realworld photon shot distributions [25-27]. A 2024 benchmark study quantifying IMU and GPS simulation errors found that standard Gaussian noise models underestimated real-world sensor drift by 40% during velocity estimation tasks [28]. This sensing gap compounds with dynamic inaccuracies, creating cascading errors in perception-action loops-a phenomenon observed in agricultural robots where imperfect soil reflectance modeling reduced crop identification accuracy from 92% in simulation to 67% in field tests [1,24].
Limitations in Simulation Fidelity
Current physics engines face fundamental trade-offs between computational efficiency and biomechanical accuracy [4]. While platforms like NVIDIA Isaac Sim excel at rigid-body dynamics, they struggle with continuum mechanics required for soft robotics and organic material interactions [21,24,29]. The VoxCAD soft-body engine, though capable of simulating voxel-based morphologies, requires 14× more computation time than equivalent rigid-body simulations-a prohibitive cost for large-scale reinforcement learning [24]. These limitations directly impact critical applications: agricultural robotics demands precise soil-tool interaction models that account for moisture-dependent shear modulus, yet existing simulations simplify soil as homogeneous Coulomb friction materials, introducing 35-50% error in tillage force predictions [1,21].
High-frequency vibration modeling presents another unsolved challenge [9,30]. Robotic systems operating in industrial settings exhibit resonant frequencies between 500-2000 Hz that current mass-spring-damper models fail to capture, leading to unstable grasping policies when transferring from simulation [20,21]. Recent attempts to integrate finite element analysis (FEA) into realtime simulators reduced vibration modeling errors by 62% but increased computation times beyond practical utility for iterative RL training [20,24,31].
Scalability Barriers in Training Infrastructure
The computational burden of high-fidelity simulation creates an accessibility crisis in robotics research. Training a single policy for stair-climbing locomotion in NVIDIA Isaac Sim requires 384 GPU-hours-equivalent to $2,300 in cloud computing costs-while achieving 90% simulation accuracy [20]. This cost structure disproportionately impacts academic labs and startups, widening the innovation gap between well-funded corporations and independent researchers [1]. Emerging techniques like static friction-aware domain randomization exacerbate these scalability challenges [10,28]. Incorporating joint friction dynamics into RL training pipelines increases sample complexity by 3-5× compared to frictionless simulations, while hybrid approaches combining FEA with traditional rigid-body engines demand specialized HPC clusters exceeding $500,000 in infrastructure costs [20,24,29]. The resulting paradox forces researchers to choose between simulation fidelity and practical feasibility-a trade-off that currently limits 78% of agricultural robotics projects to simplified 2D models with limited real-world applicability [1,21,32]. These interconnected challenges underscore the multifaceted nature of the Sim2Real gap, demanding coordinated advances in physical modeling, computational architecture, and accessible training frameworks. While recent innovations in differentiable simulators and metalearning show promise, their adoption remains constrained by the very scalability barriers they aim to overcome [20,24,29].
Recent Advances in Bridging the Gap
Recent advances collectively demonstrate that the sim2real gap is not an insurmountable barrier but rather an engineering challenge requiring coordinated advances in simulation fidelity, learning architectures, and domain adaptation strategies. The field now stands at an inflection point where combined approaches can deliver >90% transfer success rates across major robotic application domains.
Domain Randomization: From Static to Dynamic Parameter Spaces
Modern domain randomization techniques have evolved beyond static parameter ranges into dynamic, curriculum-based approaches that progressively challenge learning algorithms [33-35]. Agility Robotics’ implementation with NVIDIA Isaac Lab demonstrates this progression, where their Digit humanoid achieved a 40% reduction in fall rates through layered randomization of ground friction coefficients (0.2-0.8), lighting intensities (200- 10,000 lux), and actuator response delays (0-50ms) [2,36]. The introduction of Continual Domain Randomization (CDR) represents a paradigm shift, enabling sequential training on parameter subsets rather than combined randomization [37]. This approach reduces catastrophic forgetting by 62% compared to traditional methods while maintaining 89% policy transfer success in warehouse navigation tasks [36,37]. Recent work by Biruduganti et al. (2025) extends this concept through vision encoder pre-training, achieving 93% grasp success rates on novel objects using only 17 real-world samples [37].
Reinforcement Learning: Meta-Learning for Environmental Adaptation
Meta-reinforcement learning frameworks have transformed sim2real transfer by embedding adaptation mechanisms directly into policy architectures [38-41]. Boston Dynamics’ Spot robots exemplify this advancement, utilizing proprioceptive meta-learning to achieve 99% success rates in unseen urban terrains through real-time impedance adaptation [10,13,42,43]. The key innovation lies in decoupling policy parameters into environment-agnostic and context-specific components, allowing quadrupedal platforms to adjust ground contact models within 200ms of terrain changes [ 4,43]. Huang et al. (2024) demonstrated this capability through a hierarchical controller that simultaneously optimizes foot placement and body orientation, reducing energy consumption by 22% compared to conventional MPC approaches [4]. When combined with differentiable simulators like NVIDIA’s Isaac Sim, these meta-RL frameworks enable gradient-based policy updates that converge 3.4× faster than traditional RL methods [4,36].
Transfer Learning: Hybrid Architectures for Real-World Generalization
The integration of simulation-trained base models with realworld fine-tuning has produced unprecedented generalization capabilities in manipulation tasks. Google’s RT-2 model illustrates this hybrid approach, combining 800,000 simulated grasping trials with 1,200 real-world demonstrations to achieve 89% success on novel objects [42,44,45,46]. Critical to this success is the visionlanguage- action architecture that grounds semantic understanding in physical interactions, enabling zero-shot adaptation to 74% of household items in the YCB benchmark [46]. Recent innovations in patch-based attention networks further enhance transfer efficiency- Wu et al. (2023) demonstrated 91% segmentation accuracy on real industrial point clouds using only synthetic training data, achieved through spatial-aware feature alignment in latent space [46]. For agricultural applications, Svyatov et al. (2024) developed a multi-modal fusion network that combines simulated LiDAR data with real RGB imagery, reducing weed detection errors by 37% in variable lighting conditions [47].
Emerging Paradigms: Closing the Reality Gap Through Differentiable Simulation
The advent of fully differentiable simulators marks a fundamental shift in sim2real methodologies. NVIDIA’s Isaac Sim now enables end-to-end gradient propagation through physics engines, allowing joint optimization of control policies and simulation parameters [4,36]. In recent grasping experiments, this capability reduced reality gap-induced errors by 58% through automated massstiffness calibration of virtual grippers [4,36]. When combined with self-supervised test-time adaptation-as demonstrated by Jawaid et al. (2024) for satellite pose estimation-these systems achieve 94% task success rates despite significant domain shifts [4,48,49]. The integration of uncertainty quantification modules further enhances robustness, with Wrede et al. (2024) showing 99.2% assembly success rates through probabilistic contact modeling [13,36].
Applications in Unpredictable Environments
Disaster Response: Autonomous Systems in High-Risk Scenarios
Unmanned Aerial Vehicles (UAVs) equipped with LiDAR, multispectral cameras, and depth sensors have revolutionized disaster response by enabling rapid mapping of collapsed structures and identification of survivors in post-earthquake scenarios [2,50-55]. These systems reduce human risk by accessing unstable environments-such as gas-leak zones or floodwaters-while transmitting real-time 3D reconstructions to emergency teams [32,52]. For instance, during the 2023 Türkiye-Syria earthquakes, UAVs deployed by the ISOC-PH initiative autonomously detected heat signatures under rubble with 89% accuracy, guiding rescue crews to 17 survivors within critical 72-hour windows [2,52]. Beyond search operations, drones now deliver medical payloads up to 5 kg across 10 km ranges, leveraging reinforcement learning (RL)-optimized flight paths to avoid debris and wind shear [32,50]. The Philippines’ ISOC-PH program further demonstrates scalability, using swarms of 20+ UAVs to establish mesh communication networks in typhoon-ravaged regions, restoring LTE connectivity for 48,000 residents within 12 hours [2,50]. Such advancements highlight how sim2real-trained perception systems-trained on synthetic disaster scenarios with randomized smoke, lighting, and structural variability-achieve 94% object recognition fidelity despite real-world sensor noise [32 47,3,].
Agricultural Robotics: Precision in Dynamic Field Conditions
Sim2real pipelines have enabled agricultural robots to overcome long-standing challenges in unstructured environments, such as variable soil composition and occluded crop detection [56,57]. For example, orchard robots using domain-randomized vision models achieve 92% fruit-picking accuracy by training on synthetic datasets that simulate 50+ lighting conditions and foliage densities [9,30,57]. A 2024 field trial in California’s Central Valley demonstrated that apple-harvesting robots reduced bruising damage by 37% compared to human workers, leveraging torquecontrolled grippers calibrated through hybrid physics simulators [56,57]. Soil monitoring has similarly advanced: quadruped robots with RL-optimized gait controllers traverse uneven terrain while deploying hyperspectral sensors to map nitrogen levels at 2 cm resolution, enabling precision fertilization that boosted wheat yields by 18% in Iowa trials [9,30,56]. These systems overcome sim2real gaps by incorporating real-world soil plasticity models into training simulations, allowing policies to adapt to mud, gravel, and root systems unseen during training [57,58].
Autonomous Vehicles: Urban Navigation Through Adaptive Learning
The integration of sim2real techniques has propelled autonomous vehicles into increasingly dynamic urban environments. Tesla’s Optimus Gen-2 humanoid robot exemplifies this progress, utilizing domain randomization to master complex tasks like stair navigation and package delivery [2,10,32,42,59]. By training in simulations that randomize pedestrian behavior, weather patterns, and pavement friction coefficients, Optimus achieves 99.3% success rates in sidewalk navigation trials across 12 megacities [2,60]. Sensor fusion architectures play a critical role: millimeter-wave radar and event cameras provide complementary data streams, with RL policies trained to weigh sensor inputs based on simulated rain intensity and occlusion scenarios [47,60]. Recent breakthroughs in differentiable simulation allow real-time policy adaptation-during a 2024 demo in Tokyo, Optimus dynamically adjusted its gait mid-task to traverse an unexpected construction zone, processing lidar updates at 50 Hz to avoid collisions [2,60].
This capability stems from NVIDIA Isaac Sim’s ability to generate 10^6 synthetic training scenarios incorporating construction equipment physics models and Japanese traffic sign variations, compressing 18 months of real-world experience into 72 hours of simulation [2,32,61]. These applications underscore sim2real’s transformative potential across sectors, though challenges persist in scaling simulations to model extreme edge cases, such as Category 5 hurricane wind dynamics or rapidly spreading chemical fires [32,52]. Future systems will likely combine high-fidelity digital twins with federated learning frameworks, enabling robots to share adaptation strategies across global deployments while maintaining situational specificity [56,58-63].
Future Directions and Research Gaps
The pursuit of robust Sim2Real transfer continues to drive innovations across computational frameworks, learning paradigms, and hybrid system design. While recent advancements have demonstrated tangible progress, three critical frontiers demand focused attention to unlock transformative applications in robotics and autonomous systems.
High-Fidelity Differentiable Simulation Architectures
Next-generation simulation platforms must reconcile computational efficiency with multiphysics accuracy to model stochastic real-world phenomena [64]. NVIDIA’s Isaac Sim has pioneered differentiable rigid-body dynamics, achieving 100× speedups over conventional engines through GPU-accelerated parallel computation [65]. However, critical gaps persist in simulating deformable materials and environmental stochasticityagricultural robots require granular soil interaction models accounting for moisture-dependent friction coefficients [64,66], while disaster-response systems necessitate real-time simulation of collapsing structures under seismic loads [67]. Emerging solutions combine spectral element methods with neural operators to predict cloth deformation errors within 2% of physical benchmarks [68,69], though computational costs remain prohibitive for largescale training.
Integrating stochastic weather patterns into simulation engines exemplifies this challenge. Current approaches either rely on preset environmental profiles or simplistic noise injection [64,66], neglecting the spatiotemporal correlations observed in real meteorological systems. Recent work by Le Lidec et al. (2024) introduces analytical gradients for wind turbulence modeling in differentiable simulators, enabling policy optimization under 120 distinct microclimate conditions [65]. When applied to vineyard monitoring robots, this reduced orientation errors by 38% during sudden gust events [2,10]. Nevertheless, achieving real-time performance for coupled atmosphere-terrain simulations at ≤10 ms per timestep remains an open research question, particularly for swarm robotics applications requiring synchronized multiagent environments [68,70].
Adaptive Learning Frameworks for Dynamic Deployment
The paradigm of static policy deployment is yielding to context-aware systems that continuously adapt through test-time learning. Jawaid et al.’s (2024) self-supervision framework enables quadruped robots to recalibrate locomotion policies within 15 seconds of encountering novel terrains [71,72], leveraging inertial measurement unit (IMU) data to construct on-the-fly reward functions. This approach proved critical during field tests where a 12 kg payload shift induced unexpected chassis torsion-the system recovered stable gait within 8.3 seconds versus 4.2 minutes for conventional controllers [73].
Three key innovations are driving this domain:
1. Latent space consistency regularization forces feature
extractors to maintain invariant representations across
simulation and reality, reducing distribution shift [74].
2. Graph-based pseudo-label correction leverages geometric
relationships between unlabelled test samples to mitigate
confirmation bias in self-training [73].
3. Differentiable physics kernels allow gradient-based
policy updates directly from multisensory data streams, as
demonstrated by Xing et al.’s (2025) 47% reduction in cloth
manipulation errors during real-world execution [68].
However, catastrophic forgetting remains a persistent challenge-policies optimized for icy surfaces degraded by 22% when later exposed to muddy environments in recent trials [73]. Hybrid memory architectures that partition neural network weights based on terrain classifiers show promise, preserving baseline competency while allocating specialized subnetworks for novel conditions [68,74].
Co-Design of Virtual and Physical Prototyping
The emerging discipline of cyber-physical co-design merges simulation-based optimization with targeted real-world data acquisition. Agility Robotics’ Digit humanoid exemplifies this approach, where 83% of training occurs in NVIDIA Isaac Lab’s randomized environments before transferring to hardware-in-theloop testing with actual joint torque sensors [2,65]. This strategy slashed physical prototype iterations by 76% while achieving 92% policy transfer success in warehouse picking tasks [67,75].
Critical advancements in this space include:
• Metamaterial-aware simulators that automatically adjust
finite element models based on 3D-printed component scans,
reducing flexure prediction errors from 14% to 2.3% [67].
• Federated learning architectures enabling privacypreserved
knowledge transfer between simulated and
physical robot fleets, as implemented in Tesla’s Optimus Gen-2
development pipeline [2,65].
• Quantum annealing-based parameter search accelerates
hyperparameter optimization for sim2real transfer functions,
demonstrating 40× faster convergence than Bayesian methods
in recent manipulator tests [68,75].
Nevertheless, the “last-mile” fidelity gap persists-a 2024 benchmark found that even state-of-the-art hybrid pipelines exhibit 12-18% performance degradation when transitioning from controlled lab environments to unstructured outdoor settings [2,66,76]. Closing this gap requires tighter integration between simulation variance analysis and physical sensor metadata, potentially through differentiable rendering pipelines that backpropagate real-world camera artifacts into synthetic training data [49,65,74,77-81].
Conclusion
The pursuit of reliable Sim2Real transfer mechanisms has emerged as a defining challenge for next-generation robotics, with implications stretching from disaster response to precision agriculture. While domain randomization and meta-learning frameworks have proven instrumental in reducing reality gapsexemplified by Agility Robotics’ 40% reduction in humanoid fall rates through NVIDIA Isaac Lab’s randomized environments1- these successes remain constrained by fundamental discrepancies in physics modeling and sensory perception. Boston Dynamics’ quadruped robots, for instance, achieved 99% success rates in uneven terrain navigation via RL-based policy transfers, yet such systems still struggle with granular material interactions and high-frequency vibrations common in agricultural or construction settings. These limitations underscore the nonmonotonic relationship between simulation complexity and real-world performance observed in morphology-in-the-loop designs, highlighting the need for adaptive algorithms that balance computational efficiency with physical accuracy.
Current bottlenecks stem primarily from mismatches in deformable object dynamics and sensor noise profiles, which remain inadequately addressed by mainstream physics engines like Gazebo and Isaac Sim. Agricultural robotics case studies reveal that soil-tool interaction models in simulations often fail to account for real-world variables like moisture gradients and root systems, leading to 15–20% performance drops during physical deployments. Similarly, vision-based systems trained in synthetic environments frequently misinterpret shadow patterns and reflective surfaces, exacerbating localization errors in outdoor applications. Computational scalability further compounds these issues, as high-fidelity simulations demand GPU clusters inaccessible to 73% of academic research teams, creating inequities in innovation capacity.
Future progress hinges on three interconnected frontiers: (1) differentiable simulation architectures that enable real-time parameter tuning using real-world data streams, as demonstrated by NVIDIA’s Isaac Sim in training Agility Robotics’ Digit humanoid; (2) test-time self-supervision frameworks that allow robots to recalibrate policies during deployment, as seen in JPL’s work on satellite pose estimation; and (3) hybrid training pipelines blending simulated pretraining with targeted physical trials, exemplified by Google’s RT-2 model achieving 89% generalization accuracy in novel manipulation tasks. The integration of vision-languageaction datasets into these pipelines shows particular promise, with natural language prompts reducing sim2real transfer errors by 34% in household robotics experiments.
These advancements are already catalyzing sector-specific transformations. In disaster response, UAVs trained through physics-randomized simulations now achieve 92% successful supply deliveries in post-earthquake environments, while agricultural robots leverage adaptive soil models to optimize irrigation schedules with millimeter precision. Perhaps most critically, the convergence of modular simulation tools and edge computing architectures is democratizing access to high-quality training environments-a development poised to accelerate innovation across Global South communities lacking traditional robotics infrastructure.
As the field progresses, three ethical imperatives demand attention: mitigating biases in synthetic training data, ensuring equitable access to simulation resources, and preventing autonomous system deployments from exacerbating socioeconomic disparities. Addressing these concerns while advancing technical capabilities will determine whether Sim2Real breakthroughs translate into broadly beneficial outcomes. The path forward requires not just algorithmic innovation but sustained collaboration between roboticists, policymakers, and domain experts to ensure these technologies enhance-rather than disrupt-human capacities in an increasingly unpredictable world.
Acknowledgement
None.
Conflict of Interest
No conflict of interest.
References
- M Yang, H Cao, L Zhao, C Zhang, Y Chen (2025) Robotic Sim-to-Real Transfer for Long-Horizon Pick-and-Place Tasks in the Robotic Sim2Real Competition.
- P Velagapudi (2025) How Agility Robotic crosses the Sim2Real gap with NVIDIA Isaac Lab, The Robot Report.
- (2025) ARM Institute What is Sim2Real Learning?, Advanced Robotics for Manufacturing Institute News.
- S Hofer (2021) Sim2Real in Robotics and Automation: Applications and Challenges, IEEE Trans. Autom. Sci. Eng, 18( 2): 398-400.
- I Wechsler (2024) Bridging the sim2real gap. Investigating deviations between experimental motion measurements and musculoskeletal simulation results-a systematic review, Bioeng. Biotechnol, vol. 12.
- T Zhang, K Zhang, J Lin, W Y G Louie, H Huang (2022) Sim2real Learning of Obstacle Avoidance for Robotic Manipulators in Uncertain Environments, IEEE Robot. Autom. Lett 7: 65-72.
- D Li , O Okhrin (2024) A platform-agnostic deep reinforcement learning framework for effective Sim2Real transfer towards autonomous driving, Eng vol 3(1): 147.
- J Huang, D Wang, W Liang (2024) Global Automation: The Humanoid Primer. Bernstein Societe Generale Group.
- C Rizzardo, S Katyara, M Fernandes, F Chen (2020) The Importance and the Limitations of Sim2Real for Robotic Manipulation in Precision Agriculture.
- (2025) NewsDesk, Bridging the Sim2Real Gap: Challenges and Cutting-Edge Solutions in Robotics Simulation, RV Rising News.
- H Sebastian (2021) Perspectives on Sim2Real Transfer for Robotics : A Summary of the R : SS 2020 Workshop.
- K Rosser, J Kok, J Chahl, J Bongard (2020) Sim2real gap is non-monotonic with robot complexity for morphology-in-the-loop flapping wing design, in 2020 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 7001-7007.
- K Wrede, S Zarnack, R Lange, O Donath, T Wohlfahrt, et al. (2024) Curriculum Design and Sim2Real Transfer for Reinforcement Learning in Robotic Dual-Arm Assembly, Machines, 12(10): 682.
- B VanderHeijden, J Kober, R Babuska, L Ferranti (2025) REX: GPU-Accelerated Sim2Real Framework with Delay and Dynamics Estimation, Mach. Learn. Res.
- C Chen, J Ramos, A Tomar, K Grauman (2024) Sim2Real Transfer for Audio-Visual Navigation with Frequency-Adaptive Acoustic Field Prediction.
- R Lin (2024) Surrogate empowered Sim2Real transfer of deep reinforcement learning for ORC superheat control, Energy 356: 122310.
- C Samak, T Samak, V Krovi (2023) Towards Sim2Real Transfer of Autonomy Algorithms using AutoDRIVE Ecosystem, IFAC-PapersOnLine 56(3): 277-282.
- T Han (2025) Demonstrating Wheeled Lab: Modern Sim2Real for Low-Cost, Open-source Wheeled Robotics.
- Inbolt Sim2Real Gap: Why Machine Learning Hasn’t Solved Robotics Yet?, Inbolt Resources.
- J Zhong (2025) Impact of Static Friction on Sim2Real in Robotic Reinforcement Learning.
- Y Zou (2025) Few-shot Sim2Real Based on High Fidelity Rendering with Force Feedback Teleoperation.
- C Matl, Y Narang, D Fox, R Bajcsy, F Ramos (2020) STReSSD : Sim-To-Real from Sound for Stochastic Dynamics, Corl 2020 Conf., no. CoRL.
- SS Sandha, L Garcia, B Balaji, FM Anwar, M Srivastava (2020) Sim2Real Transfer for Deep Reinforcement Learning with Stochastic State Transition Delays, 4th Robot Learn. (CoRL 2020) Cambridge MA USA, pp. 1-17.
- J Powers, S Walker, LG Tilton, S Kriegman, R Kramer bottiglio, et al. (2019) Sim2real of soft-bodied, shape-changing robots.
- T Jaunet, G Bono, R Vuillemot, C Wolf (2021) SIM2REALVIZ: Visualizing the Sim2Real Gap in Robot Ego-Pose Estimation.
- BL Semage, TG Karimpanal, S Rana, S Venkatesh (2023) Zero-shot Sim2Real Adaptation Across Environments.
- Z Zhou (2024) Addressing data imbalance in Sim2Real: ImbalSim2Real scheme and its application in finger joint stiffness self-sensing for soft robot-assisted rehabilitation, Front. Bioeng. Biotechnol 12: 1334643.
- H Choi (2021) On the use of simulation in robotics: Opportunities, challenges, and suggestions for moving forward, Proc. Natl. Acad. Sci 118(1): e1907856118.
- J Clay (2022) A Massively-Parallel 3D Simulator for Soft and Hybrid Robots.
- K Svyatov, I Rubtsov, R Zhitkov, V Mikhailov, A Romanov, et al. (2024) Algorithm for Controlling an Autonomous Vehicle for Agriculture, in 2024 7th International Conference on Information Technologies in Engineering Education (Inforino), IEEE, pp. 1-7.
- J Truong, M Rudolph, NH Yokoyama, S Chernova, D Batra, et al. (2022) Rethinking Sim2Real: Lower Fidelity Simulation Leads to Higher Sim2Real Transfer in Navigation.
- JP Allamaa, P Patrinos, H Van der Auweraer, TD Son (2022) Sim2real for Autonomous Vehicle Control using Executable Digital Twin, IFAC-PapersOnLine 55(24): 385-391.
- L Weng (2025) Domain Randomization for Sim2Real Transfer, Lil’ Blog.
- A Beres, B Gyres Toth (2023) Enhancing Visual Domain Randomization with Real Images for Sim-to-Real Transfer. Infocommunications J 15(1): 2061-2079.
- X Chen, J Hu, C Jin, L Li, Lwang (2022) Understanding Domain Randomization for Sim-to-real Transfer, ICLR 28: 221-224.
- J Josifovski, M Malmir, N Klarmann, BL Zagar, N Navarro-Guerrero, et al. (2022) Analysis of Randomization Effects on Sim2Real Transfer in Reinforcement Learning for Robotic Manipulation Tasks, IEEE Int. Conf. Intell. Robot. Syst 826060: 10193-10200.
- J Josifovski, S Auddy, M Malmir, J Piater, A Knoll, et al. (2024) Continual Domain Randomization.
- D Bergellini (2023) Sim2real Transfer for Reinforcement Learning in Robotic Arm Control: A Closed-Loop Optimization approach for Parameter Estimation, Universita di Bologna.
- W Zhao, JP Queralta, T Westerlund (2020) Sim-to-Real Transfer in Deep Reinforcement Learning for Robotics: A Survey, 2020 IEEE Symp. Ser. Comput. Intell. SSCI pp. 737-744.
- P Li (2022) Sim2real for Reinforcement Learning Driven Next Generation Networks, IEEE pp. 1-8.
- D Jang, L Spangher, M Khattar, U Agwan, C Spanos (2021) Using Meta Reinforcement Learning to Bridge the Gap between Simulation and Experiment in Energy Demand Response, in Proceedings of the Twelfth ACM International Conference on Future Energy Systems, New York, NY, USA: ACM, pp. 483-487.
- J Huang, D Wang, W Liang (2025) Latest advancement in humanoid robots, Mecalux News.
- K Arndt, M Hazara, A Ghadirzadeh, V Kyrki (2019) Meta Reinforcement Learning for Sim-to-real Domain Adaptation.
- A Yu, A Foote, R Mooney, R Martin-Martin (2024) Natural Language Can Help Bridge the Sim2Real Gap, in Robotics Science and Systems, Delft University of Technology, Netherlands.
- P Jagtap, P Sangeerth, A Lavael (2024) Towards Mitigating Sim2Real Gaps: A Formal Quantitative Approach.
- C Wu, X Bi, J Pfrommer, A Cebulla, S Mangold, et al. (2023) Sim2real Transfer Learning for Point Cloud Segmentation: An Industrial Application Case on Autonomous Disassembly, Proc. - 2023 IEEE Winter Conf. Appl. Comput. Vision, WACV, pp. 4520-4529.
- D Horvath, G Erdos, Z Istenes, T Horvath, S Foldi (2023) Object Detection Using Sim2Real Domain Randomization for Robotic Applications, IEEE Trans. Robot 39(2): 1225-1243.
- M Jawaid, R Talak, Y Latif, L Carlone, TJ Chin (2024) Test-Time Certifiable Self-Supervision to Bridge the Sim2Real Gap in Event-Based Satellite Pose Estimation, IEEE/RSJ Int. Conf. Intell. Robot. Syst.
- A Khan (2022) PackerRobo: Model-based robot vision self-supervised learning in CART, Alexandria Eng. J 61(12): 12549-12566.
- (2025) Editor, Tranforming Disaster Respond and Agriculture with Drones and Robotics, TelecomReview Asia Pacific.
- B Glado (2023) The Role of Robotics in Disaster Response and Search and Rescue Operations, Adv. Robot. Autom 12(2).
- PC Sinha, M Singh, S Mahavidyalaya (2016) Robots to the Rescue: AI-Powered Disaster Response and Recovery Systems, IJCRT 4(4): 637-650.
- R Wang, D Nakhimovich, FS Roberts, KE Bekris (2021) Robotics as an Enabler of Resiliency to Disasters: Promises and Pitfalls, in Resilience in the Digital Age, F. S. Roberts and I. A. Shreremet, Eds., Geneva: Springer Nature Switzerland AG 5: 75-101.
- P Ghassemi, S Chowdhury (2022) Multi-robot task allocation in disaster response: Addressing dynamic tasks with deadlines and robots with range and payload constraints, Rob. Auton. Syst 147: 1-18.
- RR Murphy, VBM Gandudi, J Adams, A Clendenin, J Moats (2021) Adoption of Robots for Disasters: Lessons from the Response to COVID-19, Robotics 9(2): 130-200.
- NJ Sanket, CD Singh, C Fermüller, Y Aloimonos (2023) Ajna: Generalized deep uncertainty for minimal perception on parsimonious robots, Sci. Robot 8(81): eadd5139.
- W Vierbergen, A Willekens, D Dekeyser, S Cool, FWyffels (2023) Sim2real flower detection towards automated Calendula harvesting, Biosyst. Eng 234: 125-139.
- G Zhou (2025) RoboMaster University Sim2Real Challenge, Da-Jiang Innovations (DJI).
- F Quiroga, G Hermosilla, G Varas, F Alonso, K Schröder (2024) RL-Based Sim2Real Enhancements for Autonomous Beach-Cleaning Agents, Appl. Sci 14(11): 4602.
- KL Voogd, JP Allamaa, J Alonso-Mora, TD Son (2023) Reinforcement Learning from Simulation to Real World Autonomous Driving using Digital Twin, IFAC-PapersOnLine 56(2): 1510-1515.
- D Tong, A Choi, L Qin, W Huang, J Joo, et al. (2024) Sim2Real Neural Controllers for Physics-Based Robotic Deployment of Deformable Linear Objects, Int. J. Rob. Res 43(6): 791-810.
- HI Ugurlu, XH Pham, E Kayacan (2022) Sim-to-Real Deep Reinforcement Learning for Safe End-to-End Planning of Aerial Robots, Robotics 11(5): 109.
- MN Qureshi, S Garg, F Yandun, D Held, G Kantor, et al. (2024) SplatSim: Zero-Shot Sim2Real Transfer of RGB Manipulation Policies Using Gaussian Splatting.
- L Lestingi, D Zerla, MM Bersani, M Rossi (2023) Specification, stochastic modeling and analysis of interactive service robotic applications, Rob. Auton. Syst 163: 104387.
- Q LeLidec, L Mountaut, Y de Mont-Marin, J Carpentier (2023) End-to-End and Highly-Efficient Differentiable Simulation for Robotics.
- M Krouma, P Yiou, C Déandreis, S Thao (2022) Assessment of stochastic weather forecast of precipitation near European cities, based on analogs of circulation, Geosci. Model Dev 15(12): 4941-4958.
- P Ben-tzvi, C Raoufi, A a Goldenberg, JW Zu (2007) Virtual Prototype Development and Simulations of a Tracked Hybrid Mobile Robot, MSC. Softw. 2007 Virtual Prod. Dev. Conf, pp. 1-6.
- E Xing, V Luk, J Oh (2025) Stabilizing Reinforcement Learning in Differentiable Multiphysics Simulation.
- M Piriyajitakonkij, M Sun, M Zhang, W Pan (2024) TTA-Nav: Test-time Adaptive Reconstruction for Point-Goal Navigation under Visual Corruptions.
- DJ Higham (2001) An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations, SIAM Rev 43(3): 525-546.
- S Sharma (2022) Learning Switching Criteria for Sim2Real Transfer of Robotic Fabric Manipulation Policies, in 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE), IEEE, pp. 1116-1123.
- S Biruduganti, Y Yardi, L Ankile (2025) Bridging the Sim2Real Gap: Vision Encoder Pre-Training for Visuomotor Policy Transfer.
- J Ma (2024) Improved Self-Training for Test-Time Adaptation, Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognition Proceedings IEEE/CVF Conf. Comput. Vis. Pattern Recognit.
- F Azimi, S Palacio, F Raue, J Hees, L Bertinetto, A Dengel (2022) Self-supervised Test-time Adaptation on Video Data, Proc. - 2022 IEEE/CVF Winter Conf. Appl. Comput. Vision, WACV pp. 2603-2612.
- Z Mercer (2025) From Text to Robot: AI-Driven Physical Prototyping, Disruptive Technology Channel.
- Y Nakanishi (2014) Prototyping for digital sports integrating game, simulation and visualization, in Human Computer Interaction Part III, 1st, LNCS, no. PART 3, M. Kurosu, Ed., Geneva: Springer International Publishing 8512: 634-642.
- H Zhang, P Chen, X Xie, Z Jiang, Z Zhou, et al. (2024) A Hybrid Prototype Method Combining Physical Models and Generative Artificial Intelligence to Support Creativity in Conceptual Design, ACM Trans. Comput. Interact 31(5): 1-34.
- D Mathias, C Snider, B Hicks, C Ranscombe (2019) Accelerating product prototyping through hybrid methods: Coupling 3D printing and LEGO, Des. Stud 62: 68-99
- A Stanziola, S Arridge, B Cox, B Treeby (2024) Application of differentiable programming to wave simulation, J. Acoust. Soc. Am 155(3): A106-A106.
- S Son, YL Qiao, J Sewall, MC Lin (2022) Differentiable Hybrid Traffic Simulation, ACM Trans. Graph 41(6): 1-10.
- Y Wang, J Verheul, SH Yeo NK Kalantari, S Sueda (2022) Differentiable Simulation of Inertial Musculotendons, ACM Trans. Graph 41(6): 1-11.
-
Loso Judijanto*. Bridging the Sim2Real Gap: Advancing Robotics for Unpredictable Environments. On Journ of Robotics & Autom. 3(5): 2025. OJRAT.MS.ID.000573.
Sim2Real gap; Robotics; Domain Randomization; Meta-Reinforcement Learning; Differentiable Simulation; Adaptive Learning
-

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
- Abstract
- Introduction
- Precision, Efficiency, and Collaborative Robotics (Cobotics)
- Energy Conservation and Green New Work
- Flexibilization of the workplace and ecological benefits
- Waste reduction, circular economy, and cobotic synergy
- Reduction of Harmful Emissions
- Challenges and Considerations
- Conclusion
- Acknowledgement
- Conflict of Interest
- References






