Search results

  • Journal article
    Fiorentino F, Prociuk D, Espinosa Gonzalez AB, Neves AL, Husain L, Ramtale S, Mi E, Mi E, Macartney J, Anand S, Sherlock J, Saravanakumar K, Mayer E, de Lusignan S, Greenhalgh T, Delaney B et al., 2021,

    An early warning risk prediction tool (RECAP-V1) for patients diagnosed with COVID-19: the protocol for a statistical analysis plan

    , JMIR Research Protocols, Vol: 10, ISSN: 1929-0748

    Background: Since the start of the COVID-19 pandemic, efforts have been made to develop early warning risk scores to help clinicians decide which patient is likely to deteriorate and require hospitalisation. The RECAP (Remote COVID Assessment in Primary Care) study investigates the predictive risk of hospitalisation, deterioration, and death of patients with confirmed COVID-19, based on a set of parameters chosen through a Delphi process performed by clinicians. The study aims to use rich data collected remotely through electronic data templates integrated in the electronic health systems of a number of general practices across the UK to construct accurate predictive models. These models will use pre-existing conditions and monitoring data of a patient’s clinical parameters, such as blood oxygen saturation, to make reliable predictions as to the patient’s risk of hospital admission, deterioration, and death. Objective: We outline the statistical methods to build the prediction model to be used in the prioritisation of patients in the primary care setting. The statistical analysis plan for the RECAP study includes as its primary outcome the development and validation of the RECAP-V1 prediction model. This prediction model will be adapted as a three-category risk score split into red (high risk), amber (medium risk), and green (low risk) for any patient with suspected COVID-19. The model will predict risk of deterioration, hospitalisation, and death. Methods: After the data have been collected, we will assess the degree of missingness and use a combination of traditional data imputation using multiple imputation by chained equations, as well as more novel machine learning approaches, to impute the missing data for the final analysis. For predictive model development we will use multiple logistic regression to construct the model on a training dataset, as well as validating it on an independent dataset. The model will also be applied to multiple different datasets

  • Journal article
    Fiorentino F, Prociuk D, Espinosa Gonzalez AB, Neves AL, Husain L, Ramtale SC, Mi E, Mi E, Macartney J, Anand SN, Sherlock J, Saravanakumar K, Mayer E, de Lusignan S, Greenhalgh T, Delaney BC et al., 2021,

    An Early Warning Risk Prediction Tool (RECAP-V1) for Patients Diagnosed With COVID-19: Protocol for a Statistical Analysis Plan

    , JMIR Research Protocols, Vol: 10, Pages: e30083-e30083

    Background: Since the start of the COVID-19 pandemic, efforts have been made to develop early warning risk scores to help clinicians decide which patient is likely to deteriorate and require hospitalization. The RECAP (Remote COVID-19 Assessment in Primary Care) study investigates the predictive risk of hospitalization, deterioration, and death of patients with confirmed COVID-19, based on a set of parameters chosen through a Delphi process performed by clinicians. We aim to use rich data collected remotely through the use of electronic data templates integrated in the electronic health systems of several general practices across the United Kingdom to construct accurate predictive models. The models will be based on preexisting conditions and monitoring data of a patient’s clinical parameters (eg, blood oxygen saturation) to make reliable predictions as to the patient’s risk of hospital admission, deterioration, and death. Objective: This statistical analysis plan outlines the statistical methods to build the prediction model to be used in the prioritization of patients in the primary care setting. The statistical analysis plan for the RECAP study includes the development and validation of the RECAP-V1 prediction model as a primary outcome. This prediction model will be adapted as a three-category risk score split into red (high risk), amber (medium risk), and green (low risk) for any patient with suspected COVID-19. The model will predict the risk of deterioration and hospitalization. Methods: After the data have been collected, we will assess the degree of missingness and use a combination
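
The plan above reduces to two concrete computational steps: impute missing monitoring data with chained equations, then fit a multivariable logistic regression whose predicted probabilities are banded into a red/amber/green score. The following is a minimal, purely illustrative sketch of that kind of pipeline, using synthetic data, scikit-learn's IterativeImputer as a stand-in for MICE, and made-up risk thresholds; it is not the RECAP-V1 code.

```python
# Hypothetical sketch: chained-equation-style imputation, logistic regression,
# and traffic-light banding of predicted risk. All data and thresholds are synthetic.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))           # stand-ins for oxygen saturation, age, comorbidities, ...
y = rng.integers(0, 2, size=500)        # 1 = deterioration/hospitalisation, 0 = neither
X[rng.random(X.shape) < 0.1] = np.nan   # simulate missing remote-monitoring values

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

imputer = IterativeImputer(random_state=0)       # chained-equations-style imputation
model = LogisticRegression(max_iter=1000)
model.fit(imputer.fit_transform(X_train), y_train)

proba = model.predict_proba(imputer.transform(X_test))[:, 1]
# Illustrative thresholds only: band predicted risk into green/amber/red.
risk = np.where(proba < 0.1, "green", np.where(proba < 0.3, "amber", "red"))
print(risk[:10])
```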

  • Journal article
    Nurek M, Rayner C, Freyer A, Taylor S, Jaerte L, MacDermott N, Delaney BC et al., 2021,

    Recommendations for the recognition, diagnosis, and management of long COVID: a Delphi study

    , British Journal of General Practice, Vol: 71, Pages: E815-E825, ISSN: 0960-1643

    Background: In the absence of research into therapies and care pathways for long COVID, guidance based on ‘emerging experience’ is needed. Aim: To provide a rapid expert guide for GPs and long COVID clinical services. Design and setting: A Delphi study was conducted with a panel of primary and secondary care doctors. Method: Recommendations were generated relating to the investigation and management of long COVID. These were distributed online to a panel of UK doctors (any specialty) with an interest in, lived experience of, and/or experience treating long COVID. Over two rounds of Delphi testing, panellists indicated their agreement with each recommendation (using a five-point Likert scale) and provided comments. Recommendations eliciting a response of ‘strongly agree’, ‘agree’, or ‘neither agree nor disagree’ from 90% or more of responders were taken as showing consensus. Results: Thirty-three clinicians representing 14 specialties reached consensus on 35 recommendations. Chiefly, GPs should consider long COVID in the presence of a wide range of presenting features (not limited to fatigue and breathlessness) and exclude differential diagnoses where appropriate. Detailed history and examination with baseline investigations should be conducted in primary care. Indications for further investigation and specific therapies (for myocarditis, postural tachycardia syndrome, mast cell disorder) include hypoxia/desaturation, chest pain, palpitations, and histamine-related symptoms. Rehabilitation should be individualised, with careful activity pacing (to avoid relapse) and multidisciplinary support. Conclusion: Long COVID clinics should operate as part of an integrated care system, with GPs playing a key role in the multidisciplinary team. Holistic care pathways, investigation of specific complications, management of potential symptom clusters, and tailored rehabilitation are needed.

  • Conference paper
    Rago A, Cocarascu O, Bechlivanidis C, Toni F et al., 2020,

    Argumentation as a framework for interactive explanations for recommendations

    , KR 2020, 17th International Conference on Principles of Knowledge Representation and Reasoning, Publisher: IJCAI, Pages: 805-815, ISSN: 2334-1033

    As AI systems become ever more intertwined in our personal lives, the way in which they explain themselves to and interact with humans is an increasingly critical research area. The explanation of recommendations is thus a pivotal functionality in a user’s experience of a recommender system (RS), providing the possibility of enhancing many of its desirable features in addition to its effectiveness (accuracy wrt users’ preferences). For an RS that we prove empirically is effective, we show how argumentative abstractions underpinning recommendations can provide the structural scaffolding for (different types of) interactive explanations (IEs), i.e. explanations empowering interactions with users. We prove formally that these IEs empower feedback mechanisms that guarantee that recommendations will improve with time, hence rendering the RS scrutable. Finally, we prove experimentally that the various forms of IE (tabular, textual and conversational) induce trust in the recommendations and provide a high degree of transparency in the RS’s functionality.

  • Journal article
    Simoes Monteiro de Marvao A, McGurk K, Zheng S, Thanaj M, Bai W, Duan J, Biffi C, Mazzarotto F, Statton B, Dawes T, Savioli N, Halliday B, Xu X, Buchan R, Baksi A, Quinlan M, Tokarczuk P, Tayal U, Francis C, Whiffin N, Theotokis A, Zhang X, Jang M, Berry A, Pantazis A, Barton P, Rueckert D, Prasad S, Walsh R, Ho C, Cook S, Ware J, O'Regan D et al., 2021,

    Phenotypic expression and outcomes in individuals with rare genetic variants of hypertrophic cardiomyopathy

    , Journal of the American College of Cardiology, Vol: 78, Pages: 1097-1110, ISSN: 0735-1097

    Background: Hypertrophic cardiomyopathy (HCM) is caused by rare variants in sarcomere-encoding genes, but little is known about the clinical significance of these variants in the general population. Objectives: To compare lifetime outcomes and cardiovascular phenotypes according to the presence of rare variants in sarcomere-encoding genes amongst middle-aged adults. Methods: We analysed whole exome sequencing and cardiac magnetic resonance (CMR) imaging in UK Biobank participants stratified by sarcomere-encoding variant status. Results: The prevalence of rare variants (allele frequency <0.00004) in HCM-associated sarcomere-encoding genes in 200,584 participants was 2.9% (n=5,712; 1 in 35), and the prevalence of variants pathogenic or likely pathogenic for HCM (SARC-HCM-P/LP) was 0.25% (n=493, 1 in 407). SARC-HCM-P/LP variants were associated with an increased risk of death or major adverse cardiac events compared to controls (HR 1.69, 95% CI 1.38 to 2.07, p<0.001), mainly due to heart failure endpoints (HR 4.23, 95% CI 3.07 to 5.83, p<0.001). In 21,322 participants with CMR, SARC-HCM-P/LP variants were associated with an asymmetric increase in left ventricular maximum wall thickness (10.9±2.7 vs 9.4±1.6 mm, p<0.001), but hypertrophy (≥13 mm) was only present in 18.4% (n=9/49, 95% CI 9 to 32%). SARC-HCM-P/LP variants were still associated with heart failure after adjustment for wall thickness (HR 6.74, 95% CI 2.43 to 18.7, p<0.001). Conclusions: In this population of middle-aged adults, SARC-HCM-P/LP variants have low aggregate penetrance for overt HCM but are associated with an increased risk of adverse cardiovascular outcomes and an attenuated cardiomyopathic phenotype. Although absolute event rates are low, identification of these variants may enhance risk stratification beyond familial disease.
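
The hazard ratios quoted above come from time-to-event analysis. As a hedged illustration only (not the paper's pipeline), the sketch below fits a Cox proportional hazards model with the lifelines package to synthetic data containing a binary variant-carrier flag; exp(coef) in the output is the estimated hazard ratio.

```python
# Illustrative Cox proportional hazards fit on synthetic survival data.
# The "carrier" flag and event times are made up; only the mechanics are shown.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
carrier = rng.integers(0, 2, size=n)                         # 1 = rare-variant carrier
time = rng.exponential(10, size=n) / np.exp(0.5 * carrier)   # carriers fail earlier on average
event = (time < 8).astype(int)                               # administrative censoring at 8 years
time = np.minimum(time, 8)

df = pd.DataFrame({"time": time, "event": event, "carrier": carrier})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)"]])                    # exp(coef) is the hazard ratio
```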

  • Conference paper
    Kotonya N, Spooner T, Magazzeni D, Toni F et al., 2021,

    Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification

    , FEVER 2021
  • Conference paper
    Cursi F, Kormushev P, 2021,

    Pre-operative offline optimization of insertion point location for safe and accurate surgical task execution

    , Prague, Czech Republic, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)

    In robotically assisted surgical procedures the surgical tool is usually inserted in the patient’s body through a small incision, which acts as a constraint for the motion of the robot, known as remote center of motion (RCM). The location of the insertion point on the patient’s body has huge effects on the performance of the surgical robot. In this work we present an offline pre-operative framework to identify the optimal insertion point location in order to guarantee accurate and safe surgical task execution. The approach is validated using a serial-link manipulator in conjunction with a surgical robotic tool to perform a tumor resection task, while avoiding nearby organs. Results show that the framework is capable of identifying the best insertion point, ensuring high dexterity, high tracking accuracy, and safety in avoiding nearby organs.

  • Conference paper
    Cursi F, Bai W, Kormushev P, 2021,

    Kalibrot: a simple-to-use Matlab package for robot kinematic calibration

    , Prague, Czech Republic, International Conference on Intelligent Robots and Systems (IROS) 2021

    Robot modelling is an essential part of properly understanding how a robotic system moves and how to control it. The kinematic model of a robot is usually obtained by using the Denavit-Hartenberg convention, which relies on a set of parameters to describe the end-effector pose in a Cartesian space. These parameters are assigned based on geometrical considerations of the robotic structure; however, the assigned values may be inaccurate. The purpose of robot kinematic calibration is therefore to find optimal parameters which improve the accuracy of the robot model. In this work we present Kalibrot, an open source Matlab package for robot kinematic calibration. Kalibrot has been designed to simplify robot calibration and easily assess the calibration results. Besides computing the optimal parameters, Kalibrot provides a visualization layer showing the values of the calibrated parameters, which parameters can be identified, and the calibrated robotic structure. The capabilities of the package are here shown through simulated and real-world experiments.
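
Kalibrot itself is a Matlab package; the snippet below is only a toy Python illustration of the underlying idea of kinematic calibration: adjust model parameters by least squares so that predicted end-effector positions match measured ones. A planar two-link arm stands in for a full Denavit-Hartenberg parameterisation.

```python
# Toy kinematic calibration: fit the two link lengths of a planar 2R arm by
# least squares on noisy end-effector measurements (illustrative, not Kalibrot).
import numpy as np
from scipy.optimize import least_squares

def forward(params, q):
    """End-effector (x, y) of a planar 2R arm with link lengths params = [l1, l2]."""
    l1, l2 = params
    x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])
    y = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

rng = np.random.default_rng(1)
q = rng.uniform(-np.pi, np.pi, size=(50, 2))                   # measured joint angles
true_params = np.array([0.32, 0.27])                           # unknown in practice
meas = forward(true_params, q) + rng.normal(0, 1e-3, (50, 2))  # noisy measurements

nominal = np.array([0.30, 0.25])                               # inaccurate nominal model
fit = least_squares(lambda p: (forward(p, q) - meas).ravel(), nominal)
print("calibrated link lengths:", fit.x)
```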

  • Conference paper
    Albini E, Rago A, Baroni P, Toni F et al., 2021,

    Influence-driven explanations for bayesian network classifiers

    , PRICAI 2021, Publisher: Springer Verlag, ISSN: 0302-9743

    We propose a novel approach to building influence-driven explanations (IDXs) for (discrete) Bayesian network classifiers (BCs). IDXs feature two main advantages wrt other commonly adopted explanation methods. First, IDXs may be generated using the (causal) influences between intermediate, in addition to merely input and output, variables within BCs, thus providing a deep, rather than shallow, account of the BCs’ behaviour. Second, IDXs are generated according to a configurable set of properties, specifying which influences between variables count towards explanations. Our approach is thus flexible and can be tailored to the requirements of particular contexts or users. Leveraging on this flexibility, we propose novel IDX instances as well as IDX instances capturing existing approaches. We demonstrate IDXs’ capability to explain various forms of BCs, and assess the advantages of our proposed IDX instances with both theoretical and empirical analyses.

  • Conference paper
    Zylberajch H, Lertvittayakumjorn P, Toni F, 2021,

    HILDIF: interactive debugging of NLI models using influence functions

    , 1st Workshop on Interactive Learning for Natural Language Processing (InterNLP), Publisher: Association for Computational Linguistics (ACL), Pages: 1-6

    Biases and artifacts in training data can cause unwelcome behavior in text classifiers (such as shallow pattern matching), leading to lack of generalizability. One solution to this problem is to include users in the loop and leverage their feedback to improve models. We propose a novel explanatory debugging pipeline called HILDIF, enabling humans to improve deep text classifiers using influence functions as an explanation method. We experiment on the Natural Language Inference (NLI) task, showing that HILDIF can effectively alleviate artifact problems in fine-tuned BERT models and result in increased model generalizability.

  • Conference paper
    Wang K, Saputra RP, Foster JP, Kormushev P et al., 2021,

    Improved energy efficiency via parallel elastic elements for the straight-legged vertically-compliant robot SLIDER

    , Japan, 24th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines

    Most state-of-the-art bipedal robots are designed to be anthropomorphic, and therefore possess articulated legs with knees. Whilst this facilitates smoother, human-like locomotion, there are implementation issues that make walking with straight legs difficult. Many robots have to move with a constant bend in the legs to avoid a singularity occurring at the knee joints. The actuators must constantly work to maintain this stance, which can result in the negation of energy-saving techniques employed. Furthermore, vertical compliance disappears when the leg is straight and the robot undergoes high-energy loss events such as impacts from running and jumping, as the impact force travels through the fully extended joints to the hips. In this paper, we attempt to improve energy efficiency in a simple yet effective way: attaching bungee cords as elastic elements in parallel to the legs of a novel, knee-less biped robot SLIDER, and show that the robot’s prismatic hip joints preserve vertical compliance despite the legs being constantly straight. Due to the nonlinear dynamics of the bungee cords and various sources of friction, Bayesian Optimization is utilized to find the optimal configuration of bungee cords that achieves the largest reduction in energy consumption. The optimal solution found saves 15% of the energy consumption compared to the robot configuration without parallel elastic elements. Additional video: https://youtu.be/ZTaG9−Dz8A
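
The bungee-cord configuration above is tuned with Bayesian Optimization. As a rough sketch of that idea only (the objective here is a made-up surrogate, not the SLIDER walking simulation, and the two parameters are hypothetical), scikit-optimize's gp_minimize could be applied like this:

```python
# Illustrative only: Bayesian Optimization of two hypothetical bungee-cord parameters
# (stiffness and pretension) against a made-up energy-consumption surrogate.
import numpy as np
from skopt import gp_minimize

def energy_consumption(params):
    stiffness, pretension = params
    # Toy surrogate with a single optimum; a real setup would run a walking simulation.
    return (stiffness - 1.2) ** 2 + 0.5 * (pretension - 0.3) ** 2 + 0.1

search_space = [(0.0, 3.0),   # stiffness (arbitrary units)
                (0.0, 1.0)]   # pretension (arbitrary units)

result = gp_minimize(energy_consumption, search_space, n_calls=25, random_state=0)
print("best configuration:", result.x, "estimated energy:", result.fun)
```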

  • Journal article
    Albini E, Baroni P, Rago A, Toni F et al., 2021,

    Interpreting and explaining PageRank through argumentation semantics

    , Intelligenza Artificiale, Vol: 15, Pages: 17-34, ISSN: 1724-8035

    In this paper we show how re-interpreting PageRank as an argumentation semantics for a bipolar argumentation framework empowers its explainability. After showing that PageRank, naively re-interpreted as an argumentation semantics for support frameworks, fails to satisfy some generally desirable properties, we propose a novel approach able to reconstruct PageRank as a gradual semantics of a suitably defined bipolar argumentation framework, while satisfying these properties. We then show how the theoretical advantages afforded by this approach also enjoy an enhanced explanatory power: we propose several types of argument-based explanations for PageRank, each of which focuses on different aspects of the algorithm and uncovers information useful for the comprehension of its results.
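
For readers unfamiliar with the algorithm being re-interpreted, a bare-bones PageRank power iteration on a toy directed graph looks as follows; this is illustrative background only, not the paper's argumentation-based reconstruction.

```python
# Minimal PageRank power iteration on a toy directed graph (no dangling-node handling).
import numpy as np

edges = {0: [1, 2], 1: [2], 2: [0], 3: [2]}     # node -> outgoing links
n, d = 4, 0.85                                  # number of nodes, damping factor

# Column-stochastic transition matrix: column j spreads node j's score over its links.
M = np.zeros((n, n))
for src, outs in edges.items():
    for dst in outs:
        M[dst, src] = 1.0 / len(outs)

r = np.full(n, 1.0 / n)
for _ in range(100):
    r = (1 - d) / n + d * M @ r                 # standard PageRank update
print("PageRank scores:", np.round(r, 3))
```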

  • Report
    Paulino-Passos G, Toni F, 2021,

    Monotonicity and Noise-Tolerance in Case-Based Reasoning with Abstract Argumentation (with Appendix)

    Recently, abstract argumentation-based models of case-based reasoning ($AA\text{-}CBR$ in short) have been proposed, originally inspired by the legal domain, but also applicable as classifiers in different scenarios. However, the formal properties of $AA\text{-}CBR$ as a reasoning system remain largely unexplored. In this paper, we focus on analysing the non-monotonicity properties of a regular version of $AA\text{-}CBR$ (that we call $AA\text{-}CBR_{\succeq}$). Specifically, we prove that $AA\text{-}CBR_{\succeq}$ is not cautiously monotonic, a property frequently considered desirable in the literature. We then define a variation of $AA\text{-}CBR_{\succeq}$ which is cautiously monotonic. Further, we prove that such variation is equivalent to using $AA\text{-}CBR_{\succeq}$ with a restricted casebase consisting of all "surprising" and "sufficient" cases in the original casebase. As a by-product, we prove that this variation of $AA\text{-}CBR_{\succeq}$ is cumulative, rationally monotonic, and empowers a principled treatment of noise in "incoherent" casebases. Finally, we illustrate $AA\text{-}CBR$ and cautious monotonicity questions on a case study on the U.S. Trade Secrets domain, a legal casebase.

  • Journal article
    Cabral C, Curtis K, Curcin V, Dominguez J, Prasad V, Schilder A, Turner N, Wilkes S, Taylor J, Gallagher S, Little P, Delaney B, Moore M, Hay AD, Horwood J et al., 2021,

    Challenges to implementing electronic trial data collection in primary care: a qualitative study

    , BMC Family Practice, Vol: 22
  • Journal article
    Mersmann S, Stromich L, Song F, Wu N, Vianello F, Barahona M, Yaliraki S et al., 2021,

    ProteinLens: a web-based application for the analysis of allosteric signalling on atomistic graphs of biomolecules

    , Nucleic Acids Research, Vol: 49, Pages: W551-W558, ISSN: 0305-1048

    The investigation of allosteric effects in biomolecular structures is of great current interest in diverse areas, from fundamental biological enquiry to drug discovery. Here we present ProteinLens, a user-friendly and interactive web application for the investigation of allosteric signalling based on atomistic graph-theoretical methods. Starting from the PDB file of a biomolecule (or a biomolecular complex) ProteinLens obtains an atomistic, energy-weighted graph description of the structure of the biomolecule, and subsequently provides a systematic analysis of allosteric signalling and communication across the structure using two computationally efficient methods: Markov Transients and bond-to-bond propensities. ProteinLens scores and ranks every bond and residue according to the speed and magnitude of the propagation of fluctuations emanating from any site of choice (e.g. the active site). The results are presented through statistical quantile scores visualised with interactive plots and adjustable 3D structure viewers, which can also be downloaded. ProteinLens thus allows the investigation of signalling in biomolecular structures of interest to aid the detection of allosteric sites and pathways. ProteinLens is implemented in Python/SQL and freely available to use at: www.proteinlens.io.

  • Journal article
    Rago A, Cocarascu O, Bechlivanidis C, Lagnado D, Toni F et al., 2021,

    Argumentative explanations for interactive recommendations

    , Artificial Intelligence, Vol: 296, Pages: 1-22, ISSN: 0004-3702

    A significant challenge for recommender systems (RSs), and in fact for AI systems in general, is the systematic definition of explanations for outputs in such a way that both the explanations and the systems themselves are able to adapt to their human users' needs. In this paper we propose an RS hosting a vast repertoire of explanations, which are customisable to users in their content and format, and thus able to adapt to users' explanatory requirements, while being reasonably effective (proven empirically). Our RS is built on a graphical chassis, allowing the extraction of argumentation scaffolding, from which diverse and varied argumentative explanations for recommendations can be obtained. These recommendations are interactive because they can be questioned by users and they support adaptive feedback mechanisms designed to allow the RS to self-improve (proven theoretically). Finally, we undertake user studies in which we vary the characteristics of the argumentative explanations, showing users' general preferences for more information, but also that their tastes are diverse, thus highlighting the need for our adaptable RS.

  • Conference paper
    Laumann F, von Kuegelgen J, Barahona M, 2021,

    Kernel two-sample and independence tests for non-stationary random processes

    , ITISE 2021 (7th International conference on Time Series and Forecasting), Publisher: https://www.mdpi.com/2673-4591/5/1/31, Pages: 1-13

    Two-sample and independence tests with the kernel-based MMD and HSIC have shown remarkable results on i.i.d. data and stationary random processes. However, these statistics are not directly applicable to non-stationary random processes, a prevalent form of data in many scientific disciplines. In this work, we extend the application of MMD and HSIC to non-stationary settings by assuming access to independent realisations of the underlying random process. These realisations - in the form of non-stationary time-series measured on the same temporal grid - can then be viewed as i.i.d. samples from a multivariate probability distribution, to which MMD and HSIC can be applied. We further show how to choose suitable kernels over these high-dimensional spaces by maximising the estimated test power with respect to the kernel hyper-parameters. In experiments on synthetic data, we demonstrate superior performance of our proposed approaches in terms of test power when compared to current state-of-the-art functional or multivariate two-sample and independence tests. Finally, we employ our methods on a real socio-economic dataset as an example application.
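
As a hedged illustration of the core building block, the sketch below computes a (biased) squared-MMD estimate between two groups of synthetic random-walk realisations, each realisation treated as one high-dimensional sample; the kernel choice and the test-power-based hyper-parameter selection described above are omitted.

```python
# Squared-MMD estimate between two sets of time-series realisations, each viewed
# as a point in R^T. Data are synthetic random walks; sigma is not tuned.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of the squared MMD between samples X and Y."""
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
T, n = 50, 30                                              # series length, realisations per group
X = np.cumsum(rng.normal(size=(n, T)), axis=1)             # random walks
Y = np.cumsum(rng.normal(0.1, 1.0, size=(n, T)), axis=1)   # random walks with drift
print("MMD^2 estimate:", mmd2(X, Y))
```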

  • Conference paper
    Cully A, 2021,

    Multi-Emitter MAP-Elites: Improving quality, diversity and convergence speed with heterogeneous sets of emitters

    , Genetic and Evolutionary Computation Conference (GECCO), Publisher: ACM, Pages: 84-92

    Quality-Diversity (QD) optimisation is a new family of learning algorithms that aims at generating collections of diverse and high-performing solutions. Among those algorithms, MAP-Elites is a simple yet powerful approach that has shown promising results in numerous applications. In this paper, we introduce a novel algorithm named Multi-Emitter MAP-Elites (ME-MAP-Elites) that improves the quality, diversity and convergence speed of MAP-Elites. It is based on the recently introduced concept of emitters, which are used to drive the algorithm's exploration according to predefined heuristics. ME-MAP-Elites leverages the diversity of a heterogeneous set of emitters, in which each emitter type is designed to improve the optimisation process differently. Moreover, a bandit algorithm is used to dynamically find the best emitter set depending on the current situation. We evaluate the performance of ME-MAP-Elites on six tasks, ranging from standard optimisation problems (in 100 dimensions) to complex locomotion tasks in robotics. Our comparisons against MAP-Elites and existing approaches using emitters show that ME-MAP-Elites is faster at providing collections of solutions that are significantly more diverse and higher performing. Moreover, in the rare cases where no fruitful synergy can be found between the different emitters, ME-MAP-Elites is equivalent to the best of the compared algorithms.
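
For context on the baseline that ME-MAP-Elites extends, here is a toy single-emitter MAP-Elites loop over a one-dimensional behaviour descriptor; the paper's emitter types, bandit selection and locomotion tasks are not reproduced.

```python
# Toy MAP-Elites: maintain one elite per behaviour cell, generating candidates by
# mutating randomly chosen elites (illustrative, not the ME-MAP-Elites algorithm).
import numpy as np

rng = np.random.default_rng(0)
n_cells, dim = 20, 10
archive = {}                                   # cell index -> (fitness, solution)

def evaluate(x):
    fitness = -np.sum(x ** 2)                  # toy objective: maximise -||x||^2
    behaviour = np.clip((x[0] + 2) / 4, 0, 1)  # toy descriptor in [0, 1]
    return fitness, int(behaviour * (n_cells - 1))

for it in range(5000):
    if archive and rng.random() < 0.9:
        parent = archive[rng.choice(list(archive))][1]   # select a random elite
        x = parent + rng.normal(0, 0.1, size=dim)        # mutate it
    else:
        x = rng.uniform(-2, 2, size=dim)                 # random bootstrap solution
    fit, cell = evaluate(x)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, x)                         # keep the best per cell

print(f"{len(archive)} cells filled; best fitness {max(f for f, _ in archive.values()):.3f}")
```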

  • Conference paper
    Rakicevic N, Cully A, Kormushev P, 2021,

    Policy manifold search: exploring the manifold hypothesis for diversity-based neuroevolution

    , Genetic and Evolutionary Computation Conference (GECCO '21), Pages: 901-909

    Neuroevolution is an alternative to gradient-based optimisation that has the potential to avoid local minima and allows parallelisation. The main limiting factor is that usually it does not scale well with parameter space dimensionality. Inspired by recent work examining neural network intrinsic dimension and loss landscapes, we hypothesise that there exists a low-dimensional manifold, embedded in the policy network parameter space, around which a high-density of diverse and useful policies are located. This paper proposes a novel method for diversity-based policy search via Neuroevolution, that leverages learned representations of the policy network parameters, by performing policy search in this learned representation space. Our method relies on the Quality-Diversity (QD) framework which provides a principled approach to policy search, and maintains a collection of diverse policies, used as a dataset for learning policy representations. Further, we use the Jacobian of the inverse-mapping function to guide the search in the representation space. This ensures that the generated samples remain in the high-density regions, after mapping back to the original space. Finally, we evaluate our contributions on four continuous-control tasks in simulated environments, and compare to diversity-based baselines.

  • Journal article
    Saputra RP, Rakicevic N, Kuder I, Bilsdorfer J, Gough A, Dakin A, Cocker ED, Rock S, Harpin R, Kormushev P et al., 2021,

    ResQbot 2.0: an improved design of a mobile rescue robot with an inflatable neck securing device for safe casualty extraction

    , Applied Sciences, Vol: 11, Pages: 1-18, ISSN: 2076-3417

    Despite the fact that a large number of research studies have been conducted in the field of search and rescue robotics, significantly little attention has been given to the development of rescue robots capable of performing physical rescue interventions, including loading and transporting victims to a safe zone, i.e. casualty extraction tasks. The aim of this study is to develop a mobile rescue robot that could assist first responders when saving casualties from a danger area by performing a casualty extraction procedure, whilst ensuring that no additional injury is caused by the operation and no additional lives are put at risk. In this paper, we present a novel design of ResQbot 2.0, a mobile rescue robot designed for performing the casualty extraction task. This robot is a stretcher-type casualty extraction robot, which is a significantly improved version of the initial proof-of-concept prototype, ResQbot (retrospectively referred to as ResQbot 1.0), that was developed in our previous work. The proposed design and development of the mechanical system of ResQbot 2.0, as well as the method for safely loading a full-body casualty onto the robot’s ‘stretcher bed’, are described in detail based on the conducted literature review, evaluation of our previous work, and feedback provided by medical professionals. To verify the proposed design and the casualty extraction procedure, we perform simulation experiments in the Gazebo physics engine simulator. The simulation results demonstrate the capability of ResQbot 2.0 to successfully carry out safe casualty extractions.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
