Job Description
Job Responsibilities:
- Develop frameworks to extract, transform, and load (ETL) data from a wide variety of SQL and NoSQL data sources
- Apply Spark programming paradigms for both batch and stream processing
- Demonstrate an understanding of the data abstraction objects Spark provides for different use cases, including the choice of optimal data formats and other optimisation techniques
- Interpret large data sets and translate requirements into measurable outcomes
Job Requirements:
- Extensive experience with technologies such as Hadoop, Hive, Python, and Spark
- 7+ years of experience using SQL
- Strong communication skills and the ability to work with business stakeholders, translating technical requirements for non-technical audiences
- Demonstrated experience with source code management tools
- Applicants must be based in Sydney and have full working rights