Engineer 3 || Data
Experience: 3-5 Years
Role and Responsibilities:
- Implemented scalable and sustainable data engineering solutions using tools such as Databricks, Snowflake, Teradata, Apache Spark, and Python.
- Created, maintained, and optimized data pipelines as workloads moved from development to production for specific use cases.
- Owned end-to-end development, including coding, testing, debugging, and deployment.
- Drove automation through the effective use of modern tools, techniques, and architectures to fully automate repeatable and tedious data preparation and integration tasks and improve productivity.
- Mapped data between source systems, data warehouses, and data marts.
- Trained counterparts in data pipelining and preparation techniques, making it easier for them to integrate and consume the data they need for their own use cases.
- Interfaced with other technology teams to extract, transform, and load data from a wide variety of data sources.
- Promoted the available data and analytics capabilities and expertise to business unit leaders and educated them on how to leverage these capabilities to achieve their business goals.
- Translated SQL queries into Python code running on a distributed system (see the PySpark sketch after this list).
- Developed shared libraries to promote code reuse.
- Eager to learn new technologies in a fast-paced environment.
- Good communication skills.
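
For illustration, here is a minimal sketch of translating a SQL aggregation into distributed Python using PySpark, as mentioned in the responsibilities above. The table name, columns, and storage paths are hypothetical placeholders, not part of the original posting.

```python
# Minimal sketch: translating a SQL aggregation into PySpark.
# Table name, columns, and S3 paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql-to-pyspark").getOrCreate()

orders = spark.read.parquet("s3://bucket/orders/")  # hypothetical source

# SQL equivalent:
#   SELECT customer_id, SUM(amount) AS total_spend
#   FROM orders
#   WHERE order_date >= '2023-01-01'
#   GROUP BY customer_id
total_spend = (
    orders
    .filter(F.col("order_date") >= "2023-01-01")
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_spend"))
)

# Write the result to a downstream data mart (hypothetical target).
total_spend.write.mode("overwrite").parquet("s3://bucket/marts/total_spend/")
```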
Good to Have:
- Experience with data pipeline and workflow management tools: Rundeck, Airflow, etc. (see the Airflow sketch below).
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with stream-processing systems: Spark Streaming (see the streaming sketch below).