Data Engineer

Job ID: R-221010
Country: India - Hyderabad
Status: On Site
Posted: Sep. 30, 2025
JOB CATEGORY: Information Systems
As a Data Engineer at Amgen, you will be responsible for designing, building, and maintaining the company's data infrastructure and systems. You will collaborate with cross-functional teams to understand data requirements, develop data pipelines, implement data integration processes, and ensure data quality and integrity. Your expertise in data modeling, ETL processes, and database technologies will contribute to the effective management and utilization of data for insights and decision-making.
Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Serve as a key team member who assists in the design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions (see the sketch after this list)
- Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that can improve ETL platform performance
- Participate in sprint planning meetings and provide estimates for technical implementation work
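
Several of the responsibilities above center on Spark-based ETL. As a rough illustration of that pattern, and not a description of any actual Amgen pipeline, here is a minimal PySpark sketch; all paths, table names, and columns are hypothetical.

```python
# Minimal extract-validate-transform-load sketch in PySpark.
# All paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw records from a source location (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Validate: enforce basic data-quality rules before anything moves downstream.
clean = (
    raw
    .dropDuplicates(["event_id"])            # drop duplicate records
    .filter(F.col("event_ts").isNotNull())   # require a timestamp
)

# Transform: derive analysis-ready columns and aggregates.
daily = (
    clean
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Load: write partitioned output for downstream consumers (hypothetical path).
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)
```

In practice, a pipeline like this would be scheduled by a workflow orchestrator and carry far richer validation, but the extract-validate-transform-load shape is the core of the role described above.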
Basic Qualifications and Experience:
- Master’s degree and 1 to 3 years of experience in Computer Science, IT, or a related field, OR
- Bachelor’s degree and 3 to 5 years of experience in Computer Science, IT, or a related field
Functional Skills:
Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning for big data processing
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools (illustrated after this list)
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
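
To make the SQL-based data-quality work above concrete, here is a small, hypothetical SparkSQL check that profiles a staging table before it is promoted; the table and column names are illustrative only, and the table is assumed to be registered in the session's metastore.

```python
# Hypothetical data-quality report over a staging table using SparkSQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dq-check").getOrCreate()

report = spark.sql("""
    SELECT
        COUNT(*)                            AS total_rows,
        COUNT(*) - COUNT(order_id)          AS null_order_ids,
        -- extra rows beyond one per distinct non-null order_id
        COUNT(*) - COUNT(DISTINCT order_id) AS duplicate_order_ids
    FROM staging.orders
""")
report.show()
```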
Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and with Python packages for data processing and machine-learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated ability to work effectively in a team setting
- Demonstrated presentation skills