General Engineering
Name of Student
Institution Affiliation
Instructor’s Name
Date of Submission
Introduction
One day, I was watching a documentary on hip replacement surgery that became a turning
point in my career. I was shocked to learn that healthcare practitioners can be subjected to over
5,000 X-ray exposures in a year, which harms both the practitioners and the precision of their
work, and this prompted me to design more efficient medical solutions (Briggs, Kalra & Panacek,
2022). After reviewing the literature on mixed reality (MR) technology and finding that
integrating MR systems with deep learning produces highly accurate virtual 3D models, I
realized that MR and its virtual models could dramatically reduce reliance on X-ray machines
and, therefore, reduce radiation exposure for healthcare staff.
While at the Industrial Technology Research Institute, I worked on computer vision projects
and gained firsthand experience with the challenges of medical MR systems. Using YOLOv7 deep
learning models, I designed a probe recognition and calibration module that achieved 98% accuracy
across different probes. This experience is especially relevant to medical imaging, as one of the
core aspects of designing new surgical guidance systems is accurate object localization and
spatial realignment.
and spatial realignment. Seeing how the technology has been commercialized motivated me to
Research Experience
The research process has involved many technical difficulties that helped me grow as an
engineer. When one of our face recognition applications was deployed in a restaurant, lighting
conditions vastly affected its performance.
By applying CLAHE to enhance the contrast of specific facial regions and fine-tuning the
parameters of the ArcFace model, we significantly improved the reliability of feature detection.
These issues made me realize the need for practical solutions to real-world problems, especially
in medical MR systems, where operating conditions can vary widely. During my time in the lab
under the supervision of Professor Su, I expanded my
knowledge by creating a Unity-based MR STEAM project via Vuforia. In this project, which
received 3rd place in a contest held on campus, I learned how a carefully designed interface
shapes the success of an MR application.
At the University of Michigan, I intend to further develop myself through courses such as
“Advanced Topics in Computer Vision” and “Extended Reality for Social Impact.” Among the
research groups at the university, I am most drawn to the Information Interaction Lab led by
Professor Michael Nebeling and its pioneering Mixed Reality Analytics Toolkit (MRAT). This work
relates to my goals in the following way: the proposed course load will enable me to gain a deeper
understanding and proficiency in various fields, such as 3D scanning and image processing, as
well as critical technologies, including but not limited to Structure from Motion (SfM) and
SLAM.
My long-term goal is to develop a surgical guidance system that uses holographic imaging
reconstructions to enable new surgical procedures. By incorporating computer vision and deep
learning algorithms, this system will help minimize radiation exposure while increasing the
accuracy of surgeries. Given my wide-ranging technical background, I intend to focus on agile
calibration for real-time surgical navigation and on the specialized work needed to navigate the
FDA approval process, moving this technology to clinical utility so that safer and more accurate
surgical procedures may become available to healthcare practitioners globally.
References
Briggs, B., Kalra, S., & Panacek, E. (2022). Risk of Radiation Exposure to Emergency