Design and implement large-scale, high-performance, scalable data platforms that integrate data from multiple sources, manage structured and unstructured data, and mesh with existing warehouse structures.
Your daily activities
- Design and implement product features in collaboration with product owners, report developers, product analysts, architects, and business partners within an Agile / Scrum methodology.
- Analyze, diagnose, and identify bottlenecks in data workflows.
- Participate in client demos as well as requirements elicitation and translation into system requirements (functional and non-functional).
- Continuously monitor, refine, and report on the performance of data management systems.
What you need to know:
- Strong general programming skills.
- Solid engineering foundations (good coding practices, good architectural design skills).
- Solid experience with Python. Candidates not yet proficient in Python should be proficient in other languages and able to demonstrate that they can learn new ones quickly.
- Solid experience with Spark.
- 4+ years of experience with large-scale data engineering with an emphasis on analytics and reporting.
- 2+ years of experience developing on the Hadoop ecosystem with platforms such as AWS EMR, Cloudera, or Hortonworks, leveraging tools like Pig and Hive.
- Experience building scalable, real-time, high-performance data lake solutions in the cloud.
- Proficiency designing and implementing ETL (Extract, Transform, Load) processes that handle large volumes of data (terabytes requiring distributed processing).
- Experience with Sqoop.
- Experience developing solutions with AWS services (EMR, EC2, RDS, Lambda, etc.).
- Experience with NoSQL databases such as Apache HBase, MongoDB or Cassandra.
- Experience with stream-processing technologies such as Kafka and Spark Streaming.
- Advanced English level.
Other nice-to-have qualities we appreciate are:
- Experience working with SQL in advanced scenarios that require heavy optimization.
- Experience with Elasticsearch.
We offer
- Career growth opportunities
- Competitive salary
- Benefits and perks: employee-centric benefits, including industry-leading maternity and paternity leave, wellness programs, private medical insurance, and discount agreements on medical and dental specialties
- Professional / personal life balance
- Unlimited vacation (we work toward goals, not hours)
- English training with native teachers within our offices
- On-site and online continuous training (every team member spends at least 50 hours a year learning something new)
All applicants will be considered for employment regardless of race, color, religion, sex, sexual orientation, gender identity, national origin, veteran, or disability status.
Please note that by submitting your application, you agree with the terms and conditions of our Privacy Policy.