Remote: Senior Data Engineer with Apache Spark (PySpark and Scala)


This is a direct-client opening for a Senior Data Engineer located in Los Angeles, CA. It is a 3-6+ month contract position; the client is open to remote work.
The Senior Data Engineer is responsible for maintaining a unified high-volume data pipeline which collects and centralizes business data across the organization. Inbound data streams include core application metrics, sales records, and customer service data.

What you will need:
4+ years of experience
Bachelor's or Master's in Computer Science or a related field
You will maintain and extend an existing pipeline which includes the following technologies:
- Apache Spark jobs (PySpark and Scala)
- Hadoop/AWS EMR (Java)
- Redshift
- Reporting dataflows making heavy use of AWS Data Pipeline, Python, and Highcharts
- Docker with Kubernetes for job orchestration
Please complete the skills matrix below and send it back with your most up-to-date resume.
Full Name:
Degree/Major:
Total Experience as a Data Engineer:
Total Experience with Apache Spark:
Total Experience with Hadoop:
Total Experience with Redshift:
Total Experience with AWS:
Total Experience with Docker/Kubernetes:
Expected Hourly Rate:
Is this Rate C2C or W2?
What is the link to your LinkedIn profile?
What is the best phone number to reach you at?
Current City/State:
Availability:
Work Status (US Citizen, Green Card, etc.):
Are you ready to work onsite in Los Angeles, CA?
