Requirements:
• 1+ years of experience as a Data Engineer;
• Experience with ETL/ELT pipelines;
• Experience with SQL;
• Familiarity with AWS, Azure, or GCP;
• Optional skills (a plus):
  • Experience with Python;
  • Experience with PySpark;
  • Experience with Airflow;
  • Experience with Databricks.
Key responsibilities:
• Create ETL/ELT pipelines and data management solutions (see the illustrative sketch after this list);
• Write SQL queries for data extraction and analysis;
• Analyze data and apply data processing algorithms to solve business problems;
• Work with AWS, Azure, or GCP cloud platforms for data storage and processing;
• Optimize the performance of ETL/ELT pipelines and data infrastructure.
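To give candidates a concrete picture, here is a minimal sketch of the kind of daily ETL pipeline this role involves, assuming Airflow 2.4+; the DAG id, sample rows, and transform rule are hypothetical illustrations, not details of an actual project:

```python
# Illustrative only: a daily extract-transform-load DAG, assuming Airflow 2.4+.
# The DAG id, sample rows, and business rule below are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # A real pipeline would run a SQL query against a source system here,
    # typically via a configured Airflow connection.
    return [{"order_id": 1, "amount": 42.0}, {"order_id": 2, "amount": 5.0}]


def transform(rows):
    # Hypothetical business rule: keep only orders above a threshold.
    return [row for row in rows if row["amount"] > 10.0]


def load(rows):
    # A real pipeline would write to a warehouse or cloud storage
    # (AWS, Azure, or GCP) instead of printing.
    print(f"loading {len(rows)} rows")


def run_etl():
    load(transform(extract()))


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
):
    PythonOperator(task_id="run_etl", python_callable=run_etl)
```

In production, the extract and load steps would move data through cloud storage and a warehouse rather than in-memory lists; the structure, not the specifics, is what the day-to-day pipeline work resembles.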
We offer:
• Opportunity to work with a highly skilled engineering team on challenging projects;
• Interesting projects built from scratch using new technologies;
• Great networking opportunities with international clients;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation, sick leave, and medical insurance;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team-building activities and corporate events.