Data Engineer
India - Hyderabad

Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do.
Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Data Engineer, Manufacturing Data & Analytics
What you will do:
Let’s do this. Let’s change the world. We are looking for a highly motivated expert Data Engineer to design and develop advanced data pipelines and solutions for our Manufacturing Applications Product Team. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics for Manufacturing and Operations use cases. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets. 
- Build highly efficient data pipelines to migrate and deliver complex data across systems, drawing on an understanding of biotech, pharma, manufacturing, or related domains. 
- Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments. 
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms. 
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring. 
- Innovate, explore, and implement new tools and technologies to enhance efficient data processing. 
- Stay current with the latest design trends and techniques to ensure best-in-class data engineering. 
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks. 
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value. 
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories. 
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle. 
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions. 
What we expect of you:
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
- Any degree with 5 to 9 years of experience in Computer Science, IT, or a related field. 
Must-Have Skills
- Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, and SQL, as well as Scaled Agile methodologies. 
- Proficiency in workflow orchestration and performance tuning for big data processing. 
- Strong understanding of AWS services. 
- Ability to quickly learn, adapt, and apply new technologies. 
- Strong problem-solving and analytical skills. 
- Excellent communication and teamwork skills. 
- Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices. 
Good-to-Have Skills
- Experience with AI-assisted code development using tools such as GitHub Copilot, Cursor, and Claude Code. 
- Data engineering experience in biotechnology or pharma industry. 
- Experience in writing APIs to make data available to consumers. 
- Experience with SQL/NoSQL databases and vector databases for large language models. 
- Experience with data modeling and performance tuning for both OLAP and OLTP databases. 
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. 
- Experience with manufacturing-related data sources such as SCADA and data historians is a plus. 
Education and Professional Certifications
- AWS Certified Data Engineer preferred 
- Databricks Certification preferred 
- Scaled Agile SAFe certification preferred 
Soft Skills
- Excellent analytical and troubleshooting skills. 
- Strong verbal and written communication skills. 
- Ability to work effectively with global, virtual teams. 
- High degree of initiative and self-motivation. 
- Ability to manage multiple priorities successfully. 
- Team-oriented, with a focus on achieving team goals. 
- Ability to learn quickly, be organized, and detail-oriented. 
- Strong presentation and public speaking skills. 
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.