Toward Persistent and Reusable Reasoning in Large Language Models – Internship
Cambridge
AstraZeneca is a global, science-led biopharmaceutical business and its innovative medicines are used by millions of patients worldwide. AstraZeneca Summer Internships introduce you to the world of ground-breaking drug development, embedding you in highly dedicated teams, committed to delivering life-changing medicines to patients. Our 10–12-week program is designed for undergraduate, master's, and doctoral students. We offer exciting opportunities across Research & Development, Operations, and Enabling Units (Corporate functions).
Our internships immerse students in the pharmaceutical industry, allowing the opportunity to contribute to our diverse pipeline of medicines, whether in the lab or outside of it. You will feel trusted and empowered to take on new challenges, but with all the help and guidance you need to succeed. This internship will help you develop essential skills, expand your knowledge, and build a network that will set you up for future success. You will be surrounded by curious, passionate, and open-minded professionals eager to learn and follow the science, fostering your growth in a truly collaborative and global team.
Introduction to Role
Join our Data Science & AI team at AstraZeneca, where we are pushing the boundaries of AI-enabled drug discovery and clinical decision support. This internship sits within the Centre for AI, working at the intersection of foundation model research and production AI systems deployed across our R&D pipeline.
You will work on a cutting-edge research project investigating how to make AI reasoning more efficient and reusable, a critical capability for long-running scientific analyses, clinical workflows, and multi-step drug discovery tasks. This work has direct applications to our agentic AI initiatives, where models must maintain reasoning context across multiple sessions, tools, and stakeholders.
The project explores novel approaches to managing reasoning states in large language models, enabling AI systems to checkpoint their reasoning progress and resume seamlessly. Your work will help determine how AI reasoning can be stored efficiently while preserving the ability to continue complex analytical workflows, and whether such approaches can work across different model architectures.
This is an opportunity to contribute to foundational AI research with immediate practical impact on how we deploy intelligent systems in healthcare and life sciences.
Accountabilities
As part of this internship, you will:
Design and implement compression architectures that operate on latent representations from large language models, using attention-based mechanisms and information-theoretic principles
Conduct experiments to characterize efficiency-reliability trade-offs in reasoning systems, measuring how different compression strategies affect task performance
Evaluate approaches on reasoning benchmarks with verifiable outputs (e.g., mathematical, logical, or scientific reasoning tasks), focusing on functional capability preservation
Document findings through clear experimental reports, visualization of performance trade-offs, and recommendations for practical deployment scenarios
Collaborate with AI researchers to understand deployment requirements for persistent reasoning in production healthcare AI systems
Present results to stakeholders in Data Science & AI, translating technical findings into practical guidance for system design
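To give a flavour of the first accountability above, here is a minimal, purely illustrative sketch of attention-based compression of latent states. All names, dimensions, and the use of random stand-in queries are hypothetical, not part of the actual project: it shows how a small set of query vectors can attend over a long sequence of hidden states and pool them into a much shorter summary suitable for checkpointing.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def compress_latents(H, Q):
    """Cross-attention pooling: k query vectors attend over T latent
    states (H: T x d), producing a k x d summary with k << T."""
    d = H.shape[-1]
    scores = Q @ H.T / np.sqrt(d)   # (k, T) scaled dot-product logits
    A = softmax(scores, axis=-1)    # each query distributes weight over the T states
    return A @ H                    # (k, d) compressed summary

rng = np.random.default_rng(0)
T, d, k = 128, 64, 8                # compress 128 latent states into 8 summary vectors
H = rng.standard_normal((T, d))     # stand-in for hidden states of a reasoning trace
Q = rng.standard_normal((k, d))     # stand-in for learned compression queries
Z = compress_latents(H, Q)
print(Z.shape)                      # 16x fewer vectors to store and reload
```

In a real system the queries would be learned end-to-end, and the trade-off between the compression ratio T/k and downstream task performance is exactly the efficiency-reliability question the internship investigates.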
Essential Criteria
Currently undertaking a PhD with a focus on Machine Learning, Computer Science, AI, or a related quantitative field
Experience with deep learning frameworks and implementing transformer-based architectures
Strong understanding of representation learning concepts and familiarity with attention mechanisms
Practical experience with training neural networks, including debugging, hyperparameter tuning, and experiment tracking
Programming proficiency in Python with experience in scientific computing libraries (NumPy, etc.)
Ability to read and implement methods from recent ML research papers
Strong analytical skills with ability to design experiments, interpret results, and identify patterns in model behavior
Desirable
Familiarity with information theory concepts (mutual information, compression principles)
Experience with large language models (fine-tuning, inference, or working with model internals)
Knowledge of latent variable models or variational methods
Publications or coursework in deep learning or NLP
AstraZeneca is where you can immerse yourself in groundbreaking work with real patient impact.
Trusted to work on important projects, you’ll have the independence to take on new challenges while receiving all the guidance you need to succeed. Our collaborative environment is designed to help you grow professionally and personally, surrounded by passionate individuals eager to make a difference.
Our mission is to build an inclusive and equitable environment. We want people to feel they belong at AstraZeneca, starting with the recruitment process. We welcome and consider applications from all qualified candidates, regardless of characteristics.
We offer reasonable adjustments/accommodations to help all candidates to perform at their best. If you have a need for any reasonable adjustments/accommodations, please complete the section in the application form.
Ready to make an impact? Apply now and join us on this exciting journey!
#Earlytalent
Date Posted
26-Jan-2026

Closing Date
09-Feb-2026