Kinetic Personnel Group is recruiting a Senior Data Engineer for a $5 billion/year Public Health Plan in the Ontario, California area. This government agency is renowned for its work in the community and for being a great place to work.
The Senior Data Engineer is responsible for the design, planning, and development of Public Health Plan data solutions, and will lead the design and development of data transformations. This position is involved in data architecture, and the Senior Data Engineer is expected to lead by example: following coding standards throughout all aspects of solution development to produce efficient, high-quality solutions, and collaborating across departments to ensure member needs are met while building strong peer relationships.
Job duties:
- Design, develop and implement reliable and effective data solutions based on business requirements.
- Maintain process design artifacts like data flow diagrams, end user process maps and technical design documents.
- Find trends in data sets and develop algorithms to make raw data more useful to the health plan.
- Create and maintain optimal data pipeline architectures that meet security standards.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability.
- Develop best practices for database design and development activities.
- Create complex functions, scripts, and services to support the Data Services team.
- Ensure all data solutions meet company and performance requirements.
- Work under minimal supervision with wide latitude for independent judgment.
- Conduct code reviews.
- Direct and mentor level I and II engineers.
- Serve as a subject matter expert in key business projects.
- Recommend improvements to existing Data Services processes as necessary.
- Provide detailed analysis of data issues, data mapping, and the processes for automating and enhancing data quality.
- Analyze and integrate new technologies with existing applications to improve the design and functionality of applications.
- Maintain proficient programming skills in REST web services, C#/.NET, T-SQL, XML, JSON, and other relevant languages and/or frameworks.
- Develop and automate solutions to consume data from multiple data sources, including external APIs.
- Program and modify code in languages and frameworks such as Java, Python, and Spark, and work with data formats such as JSON, to support and implement Data Warehouse solutions.
- Design and deploy enterprise-scale cloud infrastructure solutions.
- Research, analyze, recommend, and select technical approaches for solving difficult and meaningful development and integration problems.
- Work closely with the Data and Engineering teams to design best-in-class Azure implementations.
- Clearly and regularly communicate with management, colleagues, and domain units.
Requirements:
- Minimum of eight (8) years of experience in provisioning, configuring, and developing solutions in Azure Data Lake, Azure Data Factory, Azure SQL Data Warehouse, Azure Synapse and Cosmos DB.
- Eight (8) years implementing software development methodologies.
- Eight (8) years working with relational databases.
- Experience building and optimizing big data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes.
- Experience using source control and management tools such as Azure DevOps and GitLab/GitHub.
- Experience translating requirements into design concepts and ERDs using Visio or similar tools.
- Of the total eight, a minimum of five (5) years of hands-on experience with cloud orchestration and automation tools and CI/CD pipeline creation is required.
- Bachelor’s degree in a quantitative discipline such as Computer Science, Statistics, Mathematics or Engineering from an accredited institution required.
- Experience with non-relational (NoSQL) database designs using MongoDB and others.
- Experience with relational databases such as MS SQL Server.
- Fluency with at least one scripting or programming language, such as Python.
- Experience with message queuing, stream processing, and highly scalable "big data" data stores.
Job Types: Full-time, Permanent
Pay: $120,000.00 - $152,000.00 per year
Schedule:
- 8 hour shift
- Day shift
- Monday to Friday
Experience:
- Azure Data Lake: 4 years (Required)
- SQL: 8 years (Required)
- Data warehouse: 4 years (Required)
Ability to Commute:
- Ontario, CA 91761 (Required)
Ability to Relocate:
- Ontario, CA 91761: Relocate before starting work (Required)
Work Location: Hybrid remote in Ontario, CA 91761