Swain BP, Nag DS, Anand R, Kumar H, Ganguly PK, Singh N. Current evidence on artificial intelligence in regional anesthesia. World J Clin Cases 2024; 12(33): 6613-6619 [PMID: 39600473 DOI: 10.12998/wjcc.v12.i33.6613]
Research domain: Anesthesiology
Article type: Minireviews
Bhanu Pratap Swain, Deb Sanjay Nag, Rishi Anand, Himanshu Kumar, Pradip Kumar Ganguly, Niharika Singh, Department of Anaesthesiology, Tata Main Hospital, Jamshedpur 831001, India
Bhanu Pratap Swain, Rishi Anand, Himanshu Kumar, Department of Anesthesiology, Manipal Tata Medical College, Jamshedpur 831017, India
Co-first authors: Bhanu Pratap Swain and Deb Sanjay Nag.
Author contributions: Swain BP, Nag DS, and Anand R designed the overall concept and outline of the manuscript; Kumar H, Ganguly PK, and Singh N contributed to the discussion and design of the manuscript; Swain BP, Nag DS, Anand R, Kumar H, Ganguly PK, and Singh N contributed to the writing and editing of the manuscript and the review of literature; all of the authors read and approved the final version of the manuscript to be published.
Conflict-of-interest statement: All authors declare no conflict of interest in publishing the manuscript.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/licenses/by-nc/4.0/
Corresponding author: Deb Sanjay Nag, MD, Doctor, Department of Anaesthesiology, Tata Main Hospital, C Road West, Northern Town, Bistupur, Jamshedpur 831001, India. ds.nag@tatasteel.com
Received: June 18, 2024 Revised: September 11, 2024 Accepted: September 19, 2024 Published online: November 26, 2024 Processing time: 101 Days and 1.7 Hours
Abstract
The recent advancement in regional anesthesia (RA) has been largely attributed to ultrasound technology. However, the safety and efficiency of ultrasound-guided nerve blocks depend upon the skill and experience of the performer. Even with adequate training, experience, and knowledge, human limitations such as fatigue, failure to recognize the correct anatomical structure, and unintentional needle or probe movement can hinder the overall effectiveness of RA. The integration of artificial intelligence (AI) into RA practice promises to overcome these human limitations. Machine learning, an integral part of AI, can improve its performance through continuous learning and experience, much like the human brain. It enables computers to recognize images and patterns, which is specifically useful for identifying anatomical structures during the performance of RA. AI can provide real-time guidance to clinicians by highlighting important anatomical structures on ultrasound images, and it can also assist in needle tracking and the accurate deposition of local anesthetics. The future of RA with AI integration appears promising, yet obstacles such as device malfunction, data privacy, regulatory barriers, and cost concerns can deter its clinical implementation. This minireview deliberates on the current applications of AI in RA practice, its future directions, and the barriers to its implementation.
Core Tip: Proficiency in ultrasound-guided regional anesthesia (UGRA) demands accurate interpretation of sono-anatomy and precise delivery of local anesthetics to the intended location by maneuvering a block needle. Integration of artificial intelligence (AI) can make the clinician's task considerably easier by deciphering the correct anatomy and providing real-time needle guidance. It promises to improve the success of UGRA procedures and reduce complication rates by minimizing human error. Furthermore, AI can be a valuable tool in education and training, helping trainees learn regional anesthesia techniques faster and more efficiently. Although the future looks promising, the full integration of AI into clinical practice requires user validation and ample data on clinical outcomes.
Citation: Swain BP, Nag DS, Anand R, Kumar H, Ganguly PK, Singh N. Current evidence on artificial intelligence in regional anesthesia. World J Clin Cases 2024; 12(33): 6613-6619
Before the advent of ultrasound in regional anesthesia (RA), peripheral nerve blocks (PNBs) were traditionally performed with landmark-based techniques for anatomic guidance[1]. Ultrasound guidance has transformed the practice of RA, making it safer and more successful. With the ability to visualize nerves, muscle planes, blood vessels, and needles in real time, ultrasound-guided RA (UGRA) provides exceptional accuracy and precision[1]. However, mastering UGRA is no easy feat: it requires extensive training, a deep understanding of sono-anatomy, and good hand-eye coordination. The learning curve can be steep, and trainees often struggle with ultrasound image acquisition and needle tracking[2]. These challenges are especially pronounced in obese patients, those with tissue edema, and deep-seated nerve plexuses[3]. Recent developments in echogenic needles and needle guidance systems have improved needle visibility[4], but progress in improving the quality of ultrasound images and their interpretation remains limited. Artificial intelligence (AI) integration can help overcome these challenges and improve the safety and success of UGRA procedures.
AI is revolutionizing the medical field, providing clinicians with crucial support to deliver swift, effective treatment to their patients[5]. With AI technology, machines and computers can mimic human intelligence and problem-solving abilities, overcoming common human limitations such as fatigue and loss of attention. AI can also interpret findings that the human eye struggles to decipher, or can only decipher slowly.
In RA, AI-guided solutions can play a critical role in enhancing the success rate of procedures and minimizing complications. AI can assist clinicians with ultrasound image interpretation by highlighting relevant anatomical structures and providing guidance in needle insertion, advancement, and injection of local anesthetics[6]. It can also facilitate the training of novice clinicians in UGRA techniques, allowing them to acquire the required skills quickly and efficiently, reducing the learning curve associated with the procedure[7].
The use of AI technology for ultrasound-guided PNBs and neuraxial blocks has enormous potential. Nonetheless, before these AI models are implemented in clinical settings, they must be tested in real-case scenarios to establish their validity, utility, and safety.
FUNDAMENTALS OF AI
AI technology is based on two fundamental concepts: machine learning (ML) and deep learning (DL). ML involves training computers to learn from data and algorithms, improving their performance over time. It requires exposure to training data and algorithms to identify patterns and improve decision-making. DL, on the other hand, is a subset of ML that uses multi-layered neural networks, called deep neural networks, to perform complex decision-making. These networks are structured similarly to the human brain, enabling them to identify patterns and relationships, recognize phenomena, evaluate different possibilities, and make predictions and decisions. A convolutional neural network (CNN) is a type of DL algorithm that is particularly useful for tasks involving image recognition and processing. The architecture of a CNN is modeled on the way the human brain processes visual information. By training on large amounts of data, CNNs can learn to recognize patterns and features associated with specific objects or classes. This makes them especially valuable in medical imaging, where they can help clinicians quickly and accurately identify abnormalities[8]. Although AI is revolutionizing healthcare, challenges must be overcome before its full potential is realized. The availability of high-quality medical data that satisfies data-security and privacy requirements will be critical to achieving the desired outcomes. Moreover, although AI-guided clinical therapy may appear evidence-based and realistic, hallucinations can create a false narrative, necessitating human involvement for appropriate care delivery[8].
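To make the idea of a CNN concrete, the minimal PyTorch sketch below classifies a single grayscale frame by stacking convolution, nonlinearity, and pooling layers. It is purely illustrative: the layer sizes, fixed 256 × 256 input resolution, and two-class output are our hypothetical choices and bear no relation to any commercial RA device.

```python
# Minimal illustrative CNN classifier (hypothetical sizes, not a real product's model).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 256 -> 128
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 128 -> 64
        )
        # Fully connected head; assumes a fixed 256x256 grayscale input.
        self.head = nn.Linear(16 * 64 * 64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

frame = torch.randn(1, 1, 256, 256)   # stand-in for one ultrasound frame
print(TinyCNN()(frame).shape)         # torch.Size([1, 2]) -> one score per class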
Scope of AI in ultrasound-guided RA
Performing RA under ultrasound guidance is a complex, multitasking process involving many steps: placing the ultrasound probe over the area of interest, acquiring the correct image, deciding where to insert the needle, inserting the needle correctly, adjusting the ultrasound probe to visualize the needle tip, administering the drug, confirming that the local anesthetic is spreading correctly, monitoring for complications, and finally assessing the effectiveness of the block. Carrying out these tasks demands a solid understanding of anatomy and good hand-eye coordination, which can be overwhelming for novice practitioners and trainees. AI can now assist with or supervise all these steps using information from clinicians, patients, and procedures. Current AI-assisted technologies used in UGRA include real-time anatomic guidance, target detection, and needle tracking[7].
AI-assisted real-time anatomic guidance in UGRA
Ultrasound images are currently available in two dimensions (2D) and grayscale. Anatomical structures appear in varying shades of gray depending on tissue echogenicity, and human interpretation of grayscale ultrasound images is nonintuitive and limited by experience. AI can overcome this challenge by highlighting anatomical structures and providing a visual cue to the operator, making it easier to focus on the intended structure. One of the most common AI technologies used to process ultrasound images is segmentation, in which color overlays of various sono-anatomical features differentiate relevant anatomic structures[9-13]. Less common methods include drawing bounding boxes around the intended structure, displaying expanding circular rings from the center of the nerve plexus, and placing name tags over the structure in question[14,15].
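As a rough illustration of the color-overlay idea, the NumPy sketch below blends class colors into a grayscale frame once a segmentation mask is available. The label values, color palette, and synthetic data are invented for demonstration; they do not reproduce any vendor's rendering pipeline.

```python
# Hedged sketch of a segmentation color overlay (hypothetical labels and colors).
import numpy as np

def overlay(gray: np.ndarray, mask: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """gray: HxW float image in [0, 1]; mask: HxW integer labels (0 = background)."""
    palette = {1: (1.0, 0.8, 0.0),   # e.g., nerve -> yellow
               2: (1.0, 0.0, 0.0),   # e.g., artery -> red
               3: (0.0, 0.4, 1.0)}   # e.g., muscle plane -> blue
    rgb = np.stack([gray] * 3, axis=-1)              # grayscale -> RGB
    for label, color in palette.items():
        sel = mask == label
        rgb[sel] = (1 - alpha) * rgb[sel] + alpha * np.array(color)  # alpha blend
    return rgb

frame = np.random.rand(256, 256)                     # stand-in ultrasound frame
labels = np.zeros((256, 256), dtype=int)
labels[100:140, 80:160] = 1                          # pretend the model found a nerve
print(overlay(frame, labels).shape)                  # (256, 256, 3)
```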
AI can also help the operator obtain the best ultrasound image of the intended structures during the scanning phase of PNBs. Known methodologies include displaying successful-scan indicators[13,16], distinguishing optimal from non-optimal images[17], creating a three-dimensional (3D) model of the scanned image[18], and guiding probe manipulation to obtain the ideal image[10]. AI solutions are also available for ultrasound-guided central neuraxial blocks that can significantly improve the first-pass success rate and reduce complications. Common techniques include identifying the vertebral level, measuring the distance from the skin to the posterior complex or epidural space, identifying structures by segmentation or bounding boxes, and marking the optimal needle insertion point[19-22].
AI models for PNBs
A few commercially available AI solutions currently exist for ultrasound-guided PNBs. One of the most studied is ScanNav Anatomy Peripheral Nerve Block (Intelligent Ultrasound, Cardiff, United Kingdom)[23]. It aids image interpretation by providing a color overlay of key sono-anatomical structures in commonly performed PNBs, including the interscalene, supraclavicular, superior trunk, axillary, erector spinae, suprainguinal fascia iliaca, femoral, adductor canal, and popliteal nerve blocks. The technology uses a CNN, an advanced DL model based on the U-Net architecture, to delineate anatomical structures on real-time ultrasound. The algorithm references a database of 80000 ultrasound images when labeling the sono-anatomical structures in view. ScanNav has recently undergone several validation studies in clinical settings involving experts, non-experts, and trainees[6,9,11,24,25], all of which concluded that the technology has tremendous potential to assist both non-experts and experts in RA practice.
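The U-Net idea named above can be sketched in a few lines: an encoder compresses the image to capture context, a decoder restores resolution, and a skip connection reinjects fine detail so that structure boundaries stay sharp. The miniature PyTorch version below is only a didactic approximation, with invented channel counts and a hypothetical four-class output; it is not ScanNav's actual network.

```python
# Miniature U-Net-style segmenter (didactic only; hypothetical sizes and classes).
import torch
import torch.nn as nn

def block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class MiniUNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.enc = block(1, 16)                              # full-resolution features
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)                             # coarse context
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)    # back to full resolution
        self.dec = block(32, 16)                             # 32 = 16 upsampled + 16 skip
        self.out = nn.Conv2d(16, n_classes, 1)               # per-pixel class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        d = self.dec(torch.cat([u, e], dim=1))               # skip connection keeps detail
        return self.out(d)

frame = torch.randn(1, 1, 128, 128)
print(MiniUNet()(frame).shape)   # torch.Size([1, 4, 128, 128])
```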
NerveBlox (Smart Alfa Teknoloji San. Ve Tic. A.S., Ankara, Turkey) is an AI-powered solution for PNBs similar to ScanNav. The device applies a CNN algorithm to real-time ultrasound images to label (with color overlays) important anatomical landmarks such as muscles, nerves, bones, and blood vessels. It supports 12 types of nerve blocks: interscalene brachial plexus block, superficial cervical plexus block, supraclavicular brachial plexus block, infraclavicular brachial plexus block, axillary brachial plexus block, pectoralis nerve blocks, transversus abdominis plane block, erector spinae plane block, rectus sheath block, femoral nerve block, adductor canal block, and popliteal sciatic nerve block[26]. A few studies have validated NerveBlox as a handy tool for real-time decision-making during PNB procedures[13,15,16]. Both NerveBlox and ScanNav are external devices with separate display screens that must be plugged into a compatible ultrasound machine to function.
Some of the other commercially available AI devices for PNBs are cNerve (GE Healthcare, Chicago, IL, United States)[27], NerveTrack (Samsung, Suwon, South Korea)[28], and Smart Nerve (Mindray, Shenzhen, China)[29].
AI models for central neuraxial block
Two AI systems are available to assist with central neuraxial block (CNB) procedures. One is Accuro, a handheld device manufactured by Rivanna Medical (Charlottesville, VA, United States)[30]. It uses SpineNav3D AI-based spine recognition technology to automatically detect bony landmarks and provide a real-time 3D view of the scan plane. It accurately determines the depth of the epidural space and offers real-time tracking of the needle[30]. Studies have shown that Accuro can reduce the number of needle redirections and increase the chance of first-pass success[22,31-34]. The other AI-enabled software is uSINE (HiCura Medical, Singapore), designed for automatic identification of the spinal level and needle guidance during ultrasound-guided CNBs[35]. uSINE is an external device that must be connected to a compatible ultrasound machine. The software guides the operator to an ideal ultrasound view; once the desired view is obtained, the operator marks the skin puncture site and then performs the neuraxial procedure by introducing the needle through the marked site. Real-time needle tracking, however, is not available with this software. One study reported a 92% first-attempt dural puncture success rate during spinal anesthesia in 100 obstetric patients when uSINE was used to identify the surface landmark[36].
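How a system might turn a detected structure into a depth read-out can be illustrated with elementary ultrasound geometry: at a given imaging-depth setting, each pixel row of the B-mode frame spans a fixed physical depth. The sketch below is a generic illustration with hypothetical numbers, not Rivanna's or HiCura's actual method.

```python
# Generic depth read-out from a detected image row (hypothetical values;
# not the algorithm used by any specific commercial device).
def depth_mm(row: int, image_rows: int, imaging_depth_mm: float) -> float:
    """Linear mapping: each pixel row covers imaging_depth_mm / image_rows."""
    return row * imaging_depth_mm / image_rows

# Suppose the posterior complex is detected at row 384 of a 512-row frame
# with the machine's imaging depth set to 80 mm:
print(f"estimated depth: {depth_mm(384, 512, 80.0):.1f} mm")  # 60.0 mm
```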
Role of AI in training
Mastering the safe performance of RA can be challenging for trainees because of the steep learning curve involved. However, AI-assisted technologies have paved the way for an easier and more efficient training process. With multiple AI-integrated devices available on the market, trainees can more readily overcome the difficulties they face while learning RA. These devices can highlight key anatomical structures and reconstruct 3D anatomy from 2D ultrasound images, making it easier for trainees to visualize the anatomy in three dimensions. Augmented reality (AR) has revolutionized simulation-based training, enabling trainees to improve hand-eye coordination, positioning, and dexterity during needle manipulation[37,38]. Devices like the Needle Trainer™ (Intelligent Ultrasound, Cardiff, United Kingdom) provide a non-invasive simulation of needle manipulation on a live participant using a retractable needle and a virtual needle image superimposed on the ultrasound scan[39,40]. The AR experience can be further enhanced by haptic feedback, which gives the operator the feel of a real needle and tissue. The Regional Anaesthesia Simulator and Assistant is another AI-based simulator that offers a virtual-reality environment with haptic feedback for needle manipulation in RA procedures[41]. With such advanced technology at their disposal, trainees can learn RA more efficiently and safely, without compromising patient safety.
Furthermore, real-time feedback during training can significantly improve the learning of RA. Eye-tracking technology has recently been studied in UGRA to compare visual attention to relevant anatomical structures and decision-making between experts and novices[42,43]. These studies found that experts tend to focus on clinically relevant sono-anatomical features, whereas novices struggle to do so. By integrating AI with eye-tracking technology, trainees can be prompted to refocus during live procedures, which can accelerate their learning. Similarly, hand-motion analysis has been studied during the performance of UGRA by experts and trainees. The Imperial College Surgical Assessment Device, a validated measure of hand movement during surgical training, was used to monitor hand movement during the scanning and needling phases of UGRA; experts performed significantly better than trainees[44]. A recent study used Onvision needle-tip tracking technology (B Braun Melsungen AG, Melsungen, Germany) to analyze procedure time, hand movement, and path length during simulated ultrasound-guided PNB procedures in a porcine phantom model[45]. It showed a significant reduction in procedure time and hand movement for out-of-plane techniques when the technology was used compared with when it was not. A similar study in human volunteers undergoing ultrasound-guided lumbar plexus block demonstrated reduced hand movements and path lengths[46]. Hence, AI technology can utilize the data from these devices to improve trainee performance during training.
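A "path length" metric of the kind these motion-analysis studies report can be computed very simply: sum the Euclidean distances between consecutive position samples of the tracked hand or needle tip. The sketch below uses synthetic data; the sampling rate, units, and simulated track are our assumptions, not the measurement protocol of any cited device.

```python
# Hedged sketch of a path-length metric from sampled 3D positions (synthetic data).
import numpy as np

def path_length(positions: np.ndarray) -> float:
    """positions: (N, 3) array of sequential x, y, z samples in mm."""
    steps = np.diff(positions, axis=0)                 # displacement between samples
    return float(np.linalg.norm(steps, axis=1).sum())  # sum of step lengths

rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(0, 0.5, size=(200, 3)), axis=0)  # simulated hand track
print(f"path length: {path_length(track):.1f} mm")
```

A shorter path length for the same completed block is taken as a marker of economy of movement, which is why the needle-tip tracking studies above report it alongside procedure time.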
Table 1 summarizes the currently available commercial devices for PNBs and central neuraxial blocks[23,26-30,35,40,45]. (Of note, Onvision indicates the needle-tip position inside the body both in and out of the ultrasound viewing plane, but works exclusively with the Philips Xperius ultrasound system.) Real-time analysis of ultrasound images by AI algorithms that highlight key anatomical structures can enhance their visualization and minimize the risk of accidental puncture of vital structures. AI-guided precise needle placement can potentially lead to higher block success rates, enhanced pain management, and faster recovery. Fewer complications, such as hematoma, nerve damage, and infection, would likewise translate into better outcomes.
Constraints on the integration of AI in RA
AI has displayed remarkable potential in the field of RA and is advancing at an accelerated pace. However, its complete implementation in clinical practice faces several challenges.
Numerous AI-enabled devices and models are currently available, but only a few have undergone external validation. The paucity of data on patient outcomes with these devices and the unstructured, heterogeneous literature are further challenges. Moreover, the absence of standards for verifying the accuracy and utility of these devices constitutes a significant impediment to their incorporation into clinical practice[47].
Effective ML relies on thorough training with vast and diverse datasets. However, the availability and quality of patient data remain a significant concern that can hinder the advancement of AI. Transfer learning is a valuable method for tackling this challenge: knowledge gained in one domain is leveraged and applied to another[48]. Furthermore, if the input data are flawed, the AI model's output will contain inaccuracies, potentially compromising the efficacy and safety of medical procedures[49]. Additionally, the ethical and legal aspects, along with privacy concerns stemming from such data, need to be carefully addressed. The Alan Turing Institute, a leading center for data science and AI, has developed comprehensive guidelines for ethics and safety in this domain, which should be consulted whenever ethical matters related to patient data management arise[50].
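In practice, transfer learning often looks like the brief PyTorch sketch below: a network pretrained on a large general-purpose image dataset is reused, its feature extractor is frozen, and only a small new output head is trained on the scarce medical data. The three-class head is a hypothetical stand-in; a real system would choose the backbone, head, and fine-tuning depth to suit its task.

```python
# Hedged sketch of transfer learning: reuse pretrained natural-image features
# for a new task with limited data (three-class head is hypothetical).
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                                 # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 3)         # new task-specific head
# Only backbone.fc's parameters are then trained on the small medical dataset.
```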
Though AI-assisted devices possess tremendous abilities, they may fail, just like any other technology. Misidentification of anatomy, or a model's emphasis on the wrong features, may lead to undesired outcomes. The accuracy of AI in specific populations, such as pediatric and geriatric patients and patients with altered anatomy, is not well studied[7]. Over-reliance on AI assistance therefore has downsides, particularly for trainees and learners who lack confidence in their knowledge of anatomy; a thorough knowledge of relevant anatomy remains essential. AI-assisted devices should not replace traditional expert mentoring, and trainees should learn standard techniques of ultrasound image acquisition before being exposed to AI-assisted devices.
The implementation of AI is further hindered by the high costs associated with it. However, as the usage of this technology becomes more widespread, the costs are expected to decrease gradually, making it more affordable and accessible to all in the long run.
AI in RA represents a multidisciplinary field that thrives on collaboration between clinicians, engineers, and administrators. It is imperative to align these disciplines to ensure the successful integration of AI in RA. Regulatory authorities should play a pivotal role in establishing policies that promote the ethical sharing of data across disciplines and institutes. This approach will not only foster the rapid advancement of AI technology but also accelerate its seamless integration into clinical anesthesia practice.
These challenges can only be overcome through collaboration and data sharing between healthcare institutes and research centers, ensuring the availability of diverse yet representative datasets. Rigorous validation in various clinical settings across geographic locations would help standardize and benchmark evaluation metrics for comparing different models. A multidisciplinary approach involving industry, clinicians, researchers, regulatory bodies, and experts in biomedical ethics is the key to unlocking the full potential of AI in RA.
As a relatively new development with significant potential, AI models are gradually being integrated into commercially available ultrasound machines for PNBs and central neuraxial blocks. They can potentially shorten procedure times and yield financial benefits. Similarly, precise needle placement and drug deposition can potentially result in fewer complications, faster recovery, and reduced healthcare costs. With widespread adoption and value realization, the cost of hardware and software for AI in RA is expected to decrease steadily over the next few years.
CONCLUSION
Initial reports suggest that AI applications have tremendous potential to be a game-changer in RA. Several AI models are currently available for peripheral and central neuraxial blocks. Most use deep neural networks trained on large amounts of data to assist clinicians in real-time decision-making by aiding image interpretation and needle tracking. Additionally, AI can revolutionize education and training in RA, making simulation-based training more pragmatic and intuitive and thus enabling trainees to learn faster. However, barriers to its assimilation into clinical practice remain, including insufficient patient-outcome data on the use of these devices, lack of standardization, and ethical concerns. Clinicians, scientists, and policymakers must coordinate and standardize the technology to overcome these barriers.
In conclusion, a new era of AI-assisted RA has begun. Anaesthesiologists should embrace this new technology to enhance their skills, reduce the learning curve, and provide better patient care by leveraging AI-guided solutions for RA.
Footnotes
Provenance and peer review: Invited article; Externally peer reviewed.
Peer-review model: Single blind
Corresponding Author's Membership in Professional Societies: Indian Society of Anaesthesiology, No. S2863.
Specialty type: Medicine, research and experimental
Country of origin: India
Peer-review report’s classification
Scientific Quality: Grade A, Grade A
Novelty: Grade B, Grade B
Creativity or Innovation: Grade B, Grade B
Scientific Significance: Grade A, Grade B
P-Reviewer: Liang P; Qin XR S-Editor: Luo ML L-Editor: A P-Editor: Zheng XM
REFERENCES
Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, Aldairem A, Alrashed M, Bin Saleh K, Badreldin HA, Al Yami MS, Al Harbi S, Albekairy AM. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23:689.
Gungor I, Gunaydin B, Oktar SO, M Buyukgebiz B, Bagcaz S, Ozdemir MG, Inan G. A real-time anatomy identification via tool based on artificial intelligence for ultrasound-guided peripheral nerve block procedures: an accuracy study. J Anesth. 2021;35:591-594.
Ni X, Li MZ, Zhou SQ, Xu ZD, Zhang YQ, Yu YB, Su J, Zhang LM, Liu ZQ. Accuro ultrasound-based system with computer-aided image interpretation compared to traditional palpation technique for neuraxial anesthesia placement in obese parturients undergoing cesarean delivery: a randomized controlled trial. J Anesth. 2021;35:475-482.
Bowness JS, Burckett-St Laurent D, Hernandez N, Keane PA, Lobo C, Margetts S, Moka E, Pawa A, Rosenblatt M, Sleep N, Taylor A, Woodworth G, Vasalauskaite A, Noble JA, Higham H. Assistive artificial intelligence for ultrasound image interpretation in regional anaesthesia: an external validation study. Br J Anaesth. 2023;130:217-225.
Bowness JS, Macfarlane AJR, Burckett-St Laurent D, Harris C, Margetts S, Morecroft M, Phillips D, Rees T, Sleep N, Vasalauskaite A, West S, Noble JA, Higham H. Evaluation of the impact of assistive artificial intelligence on ultrasound scanning for regional anaesthesia. Br J Anaesth. 2023;130:226-233.
Kimizuka M, Tokinaga Y, Taguchi M, Takahashi K, Yamakage M. Usefulness and accuracy of a handheld ultrasound device for epidural landmark and depth assessment by anesthesiology residents. J Anesth. 2022;36:693-697.
Shevlin SP, Turbitt L, Burckett-St Laurent D, Macfarlane AJ, West S, Bowness JS. Augmented Reality in Ultrasound-Guided Regional Anaesthesia: An Exploratory Study on Models With Potential Implications for Training. Cureus. 2023;15:e42346.
Deserno M, D'Oliveira J, Grottke O. Regional Anaesthesia Simulator and Assistant (RASimAs): Medical Image Processing Supporting Anaesthesiologists in Training and Performance of Local Blocks. In: Proceedings of the IEEE 28th International Symposium on Computer-Based Medical Systems (CBMS 2015); 2015 June 22-25; Sao Paulo, Brazil. 2015: 348-351. Available from: https://d.wanfangdata.com.cn/conference/ChZDb25mZXJlbmNlTmV3UzIwMjQwMTA5EiBhYTBmNzYyN2FkZjBiODI5NWRhZjYxYjI2YjFkNjY0YhoIdnBnNWNsZGY%3D
Harrison TK, Kim TE, Kou A, Shum C, Mariano ER, Howard SK; ADAPT (Anesthesiology-Directed Advanced Procedural Training) Research Group. Feasibility of eye-tracking technology to quantify expertise in ultrasound-guided regional anesthesia. J Anesth. 2016;30:530-533.
Borg LK, Harrison TK, Kou A, Mariano ER, Udani AD, Kim TE, Shum C, Howard SK; ADAPT (Anesthesiology-Directed Advanced Procedural Training) Research Group. Preliminary Experience Using Eye-Tracking Technology to Differentiate Novice and Expert Image Interpretation for Ultrasound-Guided Regional Anesthesia. J Ultrasound Med. 2018;37:329-336.
Kåsine T, Romundstad L, Rosseland LA, Ullensvang K, Fagerland MW, Kessler P, Bjørnå E, Sauter AR. The effect of needle tip tracking on procedural time of ultrasound-guided lumbar plexus block: a randomised controlled trial. Anaesthesia. 2020;75:72-79.
Bowness JS, Metcalfe D, El-Boghdadly K, Thurley N, Morecroft M, Hartley T, Krawczyk J, Noble JA, Higham H. Artificial intelligence for ultrasound scanning in regional anaesthesia: a scoping review of the evidence from multiple disciplines. Br J Anaesth. 2024;132:1049-1062.