AbbVie Inc., Mettawa, Illinois, United States
Job Description
Lead the planning, design, build, and deployment of complex data onboarding and engineering projects from approved specifications, and subsequent iterations, using the operating model process and best practices. Responsibilities include:

- Work with business partners and stakeholders to understand data and reporting requirements.
- Design, develop, and implement large-scale, high-volume, high-performance data models and pipelines for the Data Lake and Cloud Data Warehouse using Snowflake, the AWS ecosystem, the Hadoop ecosystem, Python, data modeling, SnowSQL, PySpark, consumption layers, etc.
- Manage and lead the build teams to ensure that technical solution designs, code reviews, data architecture, data models, and deployments are consistent with the architectural vision.
- Work closely with the Architecture/Solutions team on data assessments and data gap analysis.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Build and implement ETL frameworks to improve code quality and reliability.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL/NoSQL and AWS Big Data technologies.
- Build analytics applications that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Identify and address key constraints and requirements, such as interfaces to downstream/upstream systems or packaged software.
- Work with the Operations team on successful delivery and deployment handoffs.
- Maintain a working knowledge of message queuing, stream processing, and highly scalable ‘Big Data’ data stores.
- Create, manage, and enhance the data sharing matrix, catalog, and dictionary across all systems and applications.
- Apply experience with Big Data platforms such as Cloudera (CDH/CDP), AWS EMR, Hadoop, Spark, and Scala.
- Utilize AWS services such as EC2, S3, RDS, EKS, Lambda, API Gateway, and IAM, and cloud data warehouses such as Snowflake.
Bachelor’s degree or foreign academic equivalent in Management Information Sciences, Computer Science, or a related field of study, plus at least seven (7) years of related experience in the following:

(i) designing, developing, and implementing large-scale, high-volume, high-performance data models and pipelines for the Data Lake and Cloud Data Warehouse using Snowflake, the AWS ecosystem, the Hadoop ecosystem, Python, data modeling, SnowSQL, PySpark, consumption layers, etc.;
(ii) building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL/NoSQL and AWS Big Data technologies;
(iii) building analytics applications that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics;
(iv) creating, managing, and enhancing the data sharing matrix, catalog, and dictionary across all systems and applications;
(v) Big Data platforms such as Cloudera (CDH/CDP), AWS EMR, Hadoop, Spark, and Scala; and
(vi) utilizing AWS services such as EC2, S3, RDS, EKS, Lambda, API Gateway, and IAM, and cloud data warehouses such as Snowflake.

An Equal Opportunity Employer (EOE). 40 hrs./wk. (8:00 A.M. to 5:00 P.M.). Must have proof of legal authority to work in the United States.
Respond by email to: AbbVie Inc./Attn: L. Borre, job-opportunities@abbvie.com. Refer to ad code: ABV-0029-LB.