Data Engineer Remote Jobs

104 Results

Designworks Talent is hiring a Remote Senior Data Engineer

2d

Data Platform Engineer

Fenergo | Dublin, County Dublin, Ireland | Remote Hybrid
salesforce

Fenergo is hiring a Remote Data Platform Engineer

Fenergo exists for one reason and that is to better enable financial institutions to onboard and service their customers digitally, safely, and compliantly. One very simple reason for being. And there are 700 of us at Fenergo who wake up every day thinking about how to improve the customer onboarding experience through technology. And we are the best in the world at it. Which is why we count 32 of the top 50 financial institutions amongst our customers.  

It is also why we are consistently ranked as #1 in Customer Lifecycle Management and why we count some of the world’s top companies as our technology partners: Salesforce, IBM, PwC, Accenture and DXC, to name but a few. French and UK private equity firms have recently acquired a majority stake in Fenergo, valuing the business at over $1bn, and are looking to scale the business globally. Headquartered in Dublin, Ireland, Fenergo has offices in North America (Boston, New York and Toronto), UK (London), Spain (Madrid), Poland (Wroclaw), Asia Pacific (Sydney, Melbourne, Singapore, Hong Kong and Tokyo) and UAE (Dubai). 

What will you do?

As a Data Platform Engineer, you will play a key role in designing, building, and maintaining our cloud-based data lake infrastructure on AWS. You will work closely with a ringfenced team of system engineers, data analysts and a delivery manager to implement best practices for data management and contribute to the overall success of our data platform. This role is ideal for an experienced engineer with a strong technical background who is passionate about working on innovative data projects.

Responsibilities

  • Design and implement internal data lakes: develop and maintain data lake solutions using AWS services such as Amazon S3, AWS Glue and Amazon Athena
  • Develop data pipelines: build and manage data ingestion and transformation pipelines using tools like AWS Glue, Amazon Kinesis and Apache Spark
  • Support data processing: utilise AWS EMR, AWS Lambda and other services for data processing and transformation
  • Performance and cost optimisation: monitor and optimise data lake performance to ensure cost-effective resource utilisation
  • Compliance and security: implement data security measures using services such as AWS IAM and KMS, and ensure the data lake meets regulatory and compliance requirements
  • Continuous learning: stay up to date on emerging cloud and data technologies and apply new insights to improve data lake operations
  • Collaborative work: work with cross-functional teams to troubleshoot and resolve technical issues, providing support for data platform initiatives
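A concrete (and necessarily simplified) picture of the ingestion-and-transformation work described above: the sketch below cleans raw records and buckets them by date, mirroring the date-partitioned layout of an S3 data lake. The record fields and partitioning scheme are invented for illustration; a real implementation would run as an AWS Glue job writing to Amazon S3.

```python
import csv
import io
from collections import defaultdict

def transform_and_partition(raw_csv: str) -> dict:
    """Clean raw onboarding events and bucket them by date,
    mimicking a date-partitioned S3 data lake layout."""
    partitions = defaultdict(list)
    for row in csv.DictReader(io.StringIO(raw_csv)):
        # Basic validation: drop records missing required fields
        if not row.get("client_id") or not row.get("event_date"):
            continue
        cleaned = {
            "client_id": row["client_id"].strip(),
            "event_date": row["event_date"],
            "status": row.get("status", "unknown").lower(),
        }
        # Partition key mirrors an S3 prefix such as dt=2024-01-15/
        partitions[f"dt={cleaned['event_date']}"].append(cleaned)
    return dict(partitions)

raw = """client_id,event_date,status
C001,2024-01-15,APPROVED
C002,2024-01-15,Pending
,2024-01-16,approved
C003,2024-01-16,REVIEW
"""
parts = transform_and_partition(raw)
print(sorted(parts))                # ['dt=2024-01-15', 'dt=2024-01-16']
print(len(parts["dt=2024-01-15"]))  # 2
```

In Glue terms, the loop body corresponds to the transform applied per record, and the partition keys correspond to the S3 prefixes Athena would later query.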

Expectations

  • Bachelor's degree in Computer Science, Engineering, or related field.
  • 3+ years of experience in data engineering or related roles.
  • Strong experience with cloud platforms such as AWS and their associated services
  • Strong problem-solving and analytical skills.
  • Experience with data visualization tools like Tableau or Power BI.
  • Knowledge of data governance and compliance standards.
  • Experience with AWS Glue jobs, Apache Spark, Python and SQL
  • Ability to write clean, maintainable and efficient code
  • Experience with orchestration tools like Apache Airflow would be advantageous

Our Promise To You  

We are striving to become global leaders across all of the categories we operate in. As part of that, we are a high-performing, highly collaborative team that works cross-functionally to accommodate our clients' needs. 

What we value is at the CORE of how we succeed: 

  • Collaboration: Working together to achieve our best 
  • Outcomes: Driving success in every engagement 
  • Respect: A collective feeling of inclusion and belonging 
  • Excellence: Continuously raising the bar 

  • Healthcare cover through the VHI 
  • Company pension contribution  
  • Life assurance/ Income protection 
  • 23 days annual leave 
  • 3 company days 
  • Annual bonus opportunity 
  • Work From Home set-up allowance 
  • Opportunity to work with clients and colleagues on a global scale for a world leader in Client Lifecycle Management 
  • Other competitive company benefits, such as flexible working hours, work from home policy, bike to work scheme, sports and social committee, weekly fitness and sports classes and much more 
  • Buddy system for all new starters 
  • Collaborative working environment 
  • Extensive training programs, classroom and online, through ‘Fenergo University’ 
  • Opportunity to work on a cutting-edge Fintech Product, using the latest of tools and technologies 
  • Defined training and role tracking to allow you to see and assess your own career development and progress 
  • Active sports and social club 
  • State of the art offices in the heart of Dublin’s Docklands with great facilities, canteen and games area 

Diversity, Equality, and Inclusivity 

Fenergo is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace, where all employees are valued, respected, and can reach their full potential. We do not discriminate based on race, colour, religion, sex, national origin, age, disability, or any other characteristic protected by applicable law. Our hiring decisions are based solely on qualifications, merit, and business needs. We believe that a diverse workforce enriches our company culture, fosters innovation, and contributes to our overall success. We strive to provide a fair and supportive environment for all employees, promoting equal opportunities for career development and advancement. We encourage all qualified individuals to apply for employment opportunities and join our team in contributing to a collaborative and inclusive work environment. 

See more jobs at Fenergo

Apply for this job

3d

Lead Data Engineer

Blend36 | Edinburgh, United Kingdom, Remote
Design, api, git, docker, kubernetes, python

Blend36 is hiring a Remote Lead Data Engineer

Job Description

Life as a Lead Data Engineer at Blend

We are looking for someone who is excited by the idea of leading a data engineering squad to develop best-in-class analytical infrastructures and pipelines.

However, they also need to be aware of the practicalities of making a difference in the real world – whilst we love innovative advanced solutions, we also believe that sometimes a simple solution can have the most impact.

Our Lead Data Engineer is someone who feels most comfortable around solving problems, answering questions and proposing solutions. We place a high value on the ability to communicate and translate complex analytical thinking into non-technical and commercially oriented concepts, and experience working on difficult projects and/or with demanding stakeholders is always appreciated.

Reporting to the Director of Data Engineering, the role will work closely with the other Data Engineering Leads and with other capabilities in the organisation, such as the Data Science, Data Strategy and Business Development teams.

This role will be responsible for driving high delivery standards and innovation within the data engineering team and the wider company. This involves delivering data solutions to support the provision of actionable insights for stakeholders.

Our Data Engineering Leads remain hands-on and work with the Data Engineers within their squad.

What can you expect from the role?

  • Lead project delivery: oversee the end-to-end delivery of projects and ensure robust project governance from a Data Engineering perspective.
  • Squad management: manage a team of Data Engineers, from Junior to Senior levels, covering resource utilisation, training, mentoring, and recruitment.
  • Stakeholder engagement: prepare and present data-driven solutions to stakeholders, translating complex technical concepts into actionable insights.
  • Data pipeline ownership: design, develop, deploy, and maintain scalable, reliable, and efficient data pipelines.
  • Domain expertise: keep up to date on emerging trends and advancements within data ecosystems, ensuring the team remains at the forefront of innovation.
  • Business development support: provide expert input into proposal submissions and business development initiatives from a Data Engineering perspective.
  • Champion engineering excellence: promote best practices for high-quality engineering standards that reduce future technical debt.
  • Evolve best practices: continuously refine the Data Engineering team's ways of working, driving alignment across the squad and wider teams.
  • Strategic contributions: collaborate with the Data Engineering Director on strategic workstreams, contributing to the Data Engineering strategy and Go-to-Market (GTM) initiatives.

Qualifications

What do you need to have?

Experience:

  • Proven experience in leading technical teams and building applications using microservice architecture.
  • Extensive experience with Python and FastAPI.
  • Strong understanding of microservices principles and components such as logging, monitoring, health checks, scalability, resilience, service discovery, API gateways, and error handling.

Technical Skills:

  • Proficiency with Pydantic and data validation in FastAPI.
  • Experience with containerization technologies (e.g., Docker, Kubernetes).
  • Familiarity with CI/CD pipelines and tools.
  • Knowledge of API design and implementation.
  • Experience with monitoring and logging tools (e.g., Prometheus, Grafana, other).
  • Knowledge of security best practices in microservices architecture.
  • Familiarity with version control systems (e.g., Git) and collaborative development workflows.
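The Pydantic-and-FastAPI validation pattern this list refers to can be illustrated without either library: a model validates incoming data on construction, and the handler returns 422 on failure. The `JobRequest` fields below are invented for the example; in a real service, a `pydantic.BaseModel` inside a FastAPI route would replace the hand-rolled checks.

```python
from dataclasses import dataclass

@dataclass
class JobRequest:
    """Stand-in for a Pydantic model: validates on construction."""
    pipeline: str
    retries: int = 3

    def __post_init__(self):
        if not self.pipeline:
            raise ValueError("pipeline must be non-empty")
        if not isinstance(self.retries, int) or self.retries < 0:
            raise ValueError("retries must be a non-negative integer")

def handle_request(payload: dict) -> dict:
    """Sketch of an endpoint body: validate, then act.
    FastAPI performs the validation step automatically."""
    try:
        req = JobRequest(**payload)
    except (TypeError, ValueError) as exc:
        # 422 is FastAPI's status code for request-validation errors
        return {"status": 422, "error": str(exc)}
    return {"status": 200, "pipeline": req.pipeline, "retries": req.retries}

print(handle_request({"pipeline": "daily_load"}))
# {'status': 200, 'pipeline': 'daily_load', 'retries': 3}
print(handle_request({"pipeline": ""})["status"])  # 422
```

FastAPI would also emit field-level error details in the 422 response body, which is what makes the pattern pleasant to debug.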

Nice to Have:

  • Experience working with Retriever models, including implementation of chunking strategies.
  • Knowledge and use of vector databases.
  • Understanding of optimal approaches in querying LLM models via API.
  • Prompt engineering knowledge, with familiarity in different strategies.
  • Exposure to various prompt engineering techniques in different scenarios.
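The chunking strategies mentioned for retriever pipelines usually start from fixed-size windows with overlap, so that text straddling a boundary appears in two chunks. A minimal sketch (the sizes are arbitrary; production systems often chunk by tokens or sentences instead of characters):

```python
def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into fixed-size character windows with overlap,
    so content near a boundary is retrievable from either side."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

doc = "a" * 120
chunks = chunk_text(doc, chunk_size=50, overlap=10)
print([len(c) for c in chunks])  # [50, 50, 40]
```

Each chunk would then be embedded and stored in a vector database, with the overlap trading storage for recall at chunk boundaries.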


See more jobs at Blend36

Apply for this job

4d

Staff Data Science Engineer

Instacart | United States, Remote
sql, api, git, python, backend

Instacart is hiring a Remote Staff Data Science Engineer

We're transforming the grocery industry

At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers.

Instacart has become a lifeline for millions of people, and we’re building the team to help push our shopping cart forward. If you’re ready to do the best work of your life, come join our table.

Instacart is a Flex First team

There’s no one-size fits all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work—whether it’s from home, an office, or your favorite coffee shop—while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

OVERVIEW

ABOUT THE ROLE - We are currently seeking a Staff Data Science Engineer to join our cutting-edge team at Instacart, where we're transforming online grocery delivery with data-driven technical solutions. Our mission is to create impactful 0 to 1 intelligent systems that revolutionize the way we do business. As a key player in our dynamic environment, you'll engage in strategic development, pioneering new data collection and integration methods, while building intuitive tools to empower fellow data scientists. Collaborate cross-functionally to influence multiple facets of our product, merging data science with engineering to drive innovation.

 

ABOUT THE TEAM - The Data & Systems team is focused on driving complex, data-driven solutions that span multiple teams, with an emphasis on online grocery delivery.

 

ABOUT THE JOB

The way we will execute on our mission is to lead:

  1. Intelligent systems strategy: Engage across the company and identify the most promising opportunity areas across the product along with contributing data expertise to refine and develop new product ideas
  2. Data collection and distribution: Identify and introduce new and valuable datasets to Instacart and onboard teams to use the new datasets
  3. Intelligent systems tooling: Build interfaces to Instacart’s infrastructure that enable data scientists to easily train and deploy new intelligent systems while automatically complying with all company policies
  4. Technical work with implications across multiple teams: Own building systems and performing analysis that impact multiple teams and is often deeply complex or critical to get right

 

The role requires strong XFN collaboration where an ideal candidate can:

  1. Drive critical efforts to completion with little oversight, while stepping into roles adjacent to data science (e.g. data engineering, machine learning engineering).
  2. Bring new ideas to the team that get incorporated into the product roadmap
  3. Ruthlessly prioritize among requests from multiple competing stakeholders
  4. Act as a senior cross-functional leader, aligning the org on principles, processes, and goals

 

ABOUT YOU

MINIMUM QUALIFICATIONS

  • 6+ years of work experience in a data science or related field
  • Expert in Python, SQL, git, and Jupyter notebooks
  • Deployed several machine learning models to production and supported those models
  • Know how machine learning algorithms work
  • Deeply passionate about building tools to enable data scientists to maximize their impact

PREFERRED QUALIFICATIONS

  • 8+ years of work experience in a data science or related field
  • Significant experience as a backend software engineer in a data-related space
  • Expert in scikit-learn - you know the philosophy behind the API extremely well



Instacart provides highly market-competitive compensation and benefits in each location where our employees work. This role is remote and the base pay range for a successful candidate is dependent on their permanent work location. Please review our Flex First remote work policy here.

Offers may vary based on many factors, such as candidate experience and skills required for the role. Additionally, this role is eligible for a new hire equity grant as well as annual refresh grants. Please read more about our benefits offerings here.

For US based candidates, the base pay ranges for a successful candidate are listed below.

CA, NY, CT, NJ: $255,000 - $283,000 USD
WA: $245,000 - $272,000 USD
OR, DE, ME, MA, MD, NH, RI, VT, DC, PA, VA, CO, TX, IL, HI: $234,000 - $260,000 USD
All other states: $212,000 - $235,000 USD

See more jobs at Instacart

Apply for this job

4d

Databricks Engineer

Mobica | Remote, Poland
DevOps, Bachelor's degree, sql, azure, .net, python

Mobica is hiring a Remote Databricks Engineer

Job Description

As a Databricks Engineer, you will work with a team to build, optimize, and maintain data analytics solutions using Azure cloud services and the Databricks platform.

Key responsibilities:

  • Provide technical expertise on integrating Databricks with Azure and optimizing data processing workflows
  • Collaborate with stakeholders to translate business requirements into technical solutions
  • Implement DevOps practices using Azure DevOps for project management and deployment automation
  • Ensure adherence to security and monitoring best practices in the production environment

Qualifications

Must have:

  • Bachelor's degree in Computer Science, Engineering, or related field
  • Experience in SQL, Python, PySpark, and Databricks
  • Strong understanding of cloud services, particularly Azure Databricks and Azure SQL
  • Certifications in Azure or Databricks are preferred
  • Strong problem-solving and analytical skills
  • Ability to effectively communicate technical concepts to non-technical stakeholders
  • Proven track record of leading development teams and delivering successful projects
  • Continuous learning mindset to keep up with the latest technologies and best practices

Nice to have:

  • Experience with Azure DevOps for version control and CI/CD pipelines
  • Knowledge of Azure Web Apps, .NET, Azure Functions, Azure Automation, Azure Logic Apps, and Azure Event Hubs

See more jobs at Mobica

Apply for this job

4d

Azure Data Engineer

Mobica | Remote, Poland
DevOps, Lambda, agile, 10 years of experience, sql, azure, python

Mobica is hiring a Remote Azure Data Engineer

Job Description

As an Azure Data Engineer, you will help the team set up a new DWH environment in Microsoft Azure and deliver the existing on-premise data ingestion and data analytics functionality in the Azure cloud. You will also contribute to fine-tuning the target architecture with the best-suited tools and technologies. In addition to hands-on work, you will help and guide the team with best practices, technology standards and possible automations in Azure cloud architecture.

Qualifications

Must have skills:

  • 5-10 years of experience primarily in Cloud Data engineering, Data warehousing projects
  • Experience in Microsoft Azure enterprise Data Lake platform with
    • Azure Data Lake Storage
    • Azure Data Factory
    • Azure Data Factory Dataflow
    • Databricks using Python
    • Azure Functions
    • Azure SQL DB / Synapse
  • Experience working in Python and PowerShell
  • Experience in building and deploying pipelines in Azure DevOps
  • Familiarity with Azure security services – Active Directory, Azure Key Vault, Managed Identities, RBAC, Firewall on VNET & SubNets
  • Extensive experience in software development in an agile DevOps environment
  • Good communication & interfacing skills and able to work in a structured manner
  • Strong in collaboration and flexibility, with an entrepreneurial mindset
  • Eager to be open, learn, advise on and implement new technologies
  • Good command of English

Nice to have skills:

  • Development of APIs to expose data through REST APIs
  • Knowledge on Big Data batch and stream ingestion (Lambda architecture)
  • Fair knowledge of Data modelling (specifically Dimensional modelling)
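On the dimensional-modelling point above: the core idea is a central fact table of measures joined to descriptive dimension tables (a star schema). A toy example in SQLite, with table and column names invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per product, descriptive attributes only
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
# Fact table: one row per sale, a foreign key plus measures
cur.execute("CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, product_id INTEGER, amount REAL)")

cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "books"), (2, "games")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(10, 1, 9.5), (11, 1, 12.0), (12, 2, 30.0)])

# The typical star-schema query: aggregate measures, grouped by a dimension attribute
rows = cur.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('books', 21.5), ('games', 30.0)]
```

The same shape scales up directly: in Azure Synapse or Databricks the fact table grows to billions of rows while the dimensions stay small and joinable.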

See more jobs at Mobica

Apply for this job

4d

Snowflake Data Engineer

Mobica | Remote, Poland
Bachelor's degree, sql, Design, azure, python, AWS

Mobica is hiring a Remote Snowflake Data Engineer

Job Description

As a Snowflake Data Engineer, you will be responsible for designing, developing, and maintaining data pipelines and solutions using Snowflake. You will work closely with data architects, data scientists, and business stakeholders to ensure the efficient and effective use of data. Your role will involve building scalable data solutions, optimizing performance, and ensuring data quality and security.

Key Responsibilities:

  • Design and develop data pipelines and ETL processes using Snowflake.
  • Collaborate with data architects and business stakeholders to understand data requirements and translate them into technical solutions.
  • Implement data models, data warehouses, and data marts in Snowflake.
  • Optimize Snowflake performance, including query performance tuning and resource management.
  • Ensure data quality, data governance, and data security best practices are followed.
  • Troubleshoot and resolve data-related issues, ensuring timely and effective solutions.
  • Develop and maintain documentation for data pipelines, data models, and data processes.
  • Provide technical support and guidance to team members and stakeholders.
  • Lead and execute data migration projects, including migrating data from legacy systems to Snowflake.
  • Develop and implement strategies for data migration, ensuring minimal downtime and data integrity during the migration process.
  • Design and implement data lake solutions, integrating them with Snowflake to support large-scale data storage and processing.
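Loads like those described above are typically made idempotent with a merge/upsert so that re-runs do not duplicate rows. Snowflake has a native `MERGE INTO` statement for this; the sketch below uses SQLite's equivalent upsert syntax, with table and key names invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, updated TEXT)")

def upsert(rows):
    """Idempotent load step: insert new keys, overwrite existing ones.
    In Snowflake this would be MERGE INTO ... WHEN MATCHED THEN UPDATE."""
    cur.executemany(
        """INSERT INTO customers (id, name, updated) VALUES (?, ?, ?)
           ON CONFLICT(id) DO UPDATE SET name = excluded.name, updated = excluded.updated""",
        rows,
    )

upsert([(1, "Acme", "2024-01-01"), (2, "Globex", "2024-01-01")])
upsert([(1, "Acme Ltd", "2024-01-02")])  # re-run with a changed record

print(cur.execute("SELECT id, name FROM customers ORDER BY id").fetchall())
# [(1, 'Acme Ltd'), (2, 'Globex')]
```

Because the second load updates rather than re-inserts, the pipeline can safely be retried after a partial failure, which is the property migration projects depend on.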

Qualifications

Must have:

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • Proven experience in designing and developing data solutions using Snowflake.
  • Strong knowledge of SQL and ETL processes.
  • Experience with data modeling, data warehousing, and data integration.
  • Proficiency in scripting languages such as Python or Java.
  • Familiarity with cloud platforms such as AWS, Azure, or GCP.
  • Excellent problem-solving and troubleshooting skills.
  • Strong communication and interpersonal skills.
  • Ability to work collaboratively in a team environment.
  • Good command of English

Nice to have:

  • Exposure to data governance best practices is a plus
  • Certification in Snowflake is an advantage

See more jobs at Mobica

Apply for this job

7d

Staff Data Engineer

SonderMind | Denver, CO or Remote
S3, 4 years of experience, terraform, scala, nosql, airflow, sql, Design, mongodb, pytest, api, java, c++, docker, kubernetes, python, AWS, backend

SonderMind is hiring a Remote Staff Data Engineer

About SonderMind

At SonderMind, we know that therapy works. SonderMind provides accessible, personalized mental healthcare that produces high-quality outcomes for patients. SonderMind's individualized approach to care starts with using innovative technology to help people not just find a therapist, but find the right, in-network therapist for them, should they choose to use their insurance. From there, SonderMind's clinicians are committed to delivering best-in-class care to all patients by focusing on high-quality clinical outcomes. To enable our clinicians to thrive, SonderMind defines care expectations while providing tools such as clinical note-taking, secure telehealth capabilities, outcome measurement, messaging, and direct booking.

To follow the latest SonderMind news, get to know our clients, and learn about what it’s like to work at SonderMind, you can follow us on Instagram, Linkedin, and Twitter. 

About the Role

In this role, you will be responsible for designing, building, and managing the information infrastructure systems used to collect, store, process, and distribute production and reporting data. This role will work closely with software and data engineers, as well as data scientists, to deploy Applied Science services. This role will also interact with business analysts and technical marketing teams to ensure they have the data necessary to complete their analyses and campaigns.

What you will do 

  • Strategically design, construct, install, test, and maintain highly scalable data management systems
  • Develop and maintain databases, data processing procedures, and pipelines
  • Integrate new data management technologies and software engineering tools into existing structures
  • Develop processes for data mining, data modeling, and data production
  • Translate complex functional and technical requirements into detailed architecture, design, and high-performing software and applications
  • Create custom software components and analytics applications
  • Troubleshoot data-related issues and perform root cause analysis to resolve them
  • Manage overall pipeline orchestration
  • Optimize data warehouse performance

What does success look like?

Success in this role will be gauged by the seamless and efficient operations of data infrastructure. This includes minimal downtime, accurate and timely data delivery and the successful implementation of new technologies and tools. The individual will have demonstrated their ability to collaborate effectively to define solutions with both technical and non-technical team members across data science, engineering, product and our core business functions. They will have made significant contributions to improving our data systems, whether through optimizing existing processes or developing innovative new solutions. Ultimately, their work will enable more informed and effective decision-making across the organization.

Who You Are 

Skills, experience, and education needed to succeed in this role: 

  • Bachelor’s degree in Computer Science, Engineering, or a related field
  • Minimum 4 years experience as a Data Engineer or in a similar role
  • Proficient with scripting and programming languages (Python, Java, Scala, etc.)
  • In-depth knowledge of SQL and other database related technologies
  • Experience with Snowflake, DBT, BigQuery, Fivetran, Segment, etc.
  • Experience with AWS cloud services (S3, RDS, Redshift, etc.)
  • Experience with data pipeline and workflow management tools such as Airflow
  • Backend Development experience with the following:
    • REST API design using web frameworks such as FastAPI, Flask
    • Data modeling for microservices, especially using NoSQL databases like MongoDB
    • CI/CD pipelines (Gitlab preferred) and microservices deployment to AWS cloud
    • Docker, Kubernetes, Helm Charts, Terraform
    • Developing unit tests for microservices using testing frameworks like pytest
  • Strong negotiation and interpersonal skills: written, verbal, analytical
  • Motivated and influential – proactive with the ability to adhere to deadlines; work to “get the job done” in a fast-paced environment
  • Self-starter with the ability to multi-task
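On the unit-testing bullet above: pytest-style tests are plain functions containing bare `assert`s. The sketch below tests an invented transform and, to stay self-contained, calls the test functions directly instead of relying on the pytest runner (which would discover the `test_*` names automatically):

```python
def normalize_email(raw: str) -> str:
    """Toy service-layer transform: trim and lowercase an email address."""
    cleaned = raw.strip().lower()
    if "@" not in cleaned:
        raise ValueError(f"not an email: {cleaned!r}")
    return cleaned

def test_normalize_email_trims_and_lowercases():
    assert normalize_email("  Jane.Doe@Example.COM ") == "jane.doe@example.com"

def test_normalize_email_rejects_garbage():
    # pytest offers pytest.raises for this; plain try/except shown here
    try:
        normalize_email("not-an-email")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

test_normalize_email_trims_and_lowercases()
test_normalize_email_rejects_garbage()
print("ok")
```

In a microservice codebase these tests would live next to the service module and run in the CI/CD pipeline on every commit.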

Our Benefits 

The anticipated salary range for this role will be $132,000-165,000.

As leaders in redesigning behavioral health, we walk the walk with our employees' benefits. We want the experience of working at SonderMind to accelerate people’s careers and enrich their lives, so we focus on meeting SonderMinders wherever they are and supporting them in all facets of their lives and work.

Our benefits include:

  • A commitment to fostering flexible hybrid work
  • A generous PTO policy with a minimum of three weeks off per year
  • Free therapy coverage benefits to ensure our employees have access to the care they need (must be enrolled in our medical plans to participate)
  • Competitive Medical, Dental, and Vision coverage with plans to meet every need, including HSA ($1,100 company contribution) and FSA options
  • Employer-paid short-term, long-term disability, life & AD&D to cover life's unexpected events. Not only that, we also cover the difference in salary for up to seven (7) weeks of short-term disability leave (after the required waiting period) should you need to use it.
  • Eight weeks of paid Parental Leave (if the parent also qualifies for STD, this benefit is in addition which allows between 8-16 weeks of paid leave)
  • 401K retirement plan with 100% matching which immediately vests on up to 4% of base salary
  • Travel to Denver 1x a year for annual Shift gathering
  • Fourteen (14) company holidays
  • Company Shutdown between Christmas and New Years
  • Supplemental life insurance, pet insurance coverage, commuter benefits and more!

Application Deadline

This position will be an ongoing recruitment process and will be open until filled.

Equal Opportunity 
SonderMind does not discriminate in employment opportunities or practices based on race, color, creed, sex, gender, gender identity or expression, pregnancy, childbirth or related medical conditions, religion, veteran and military status, marital status, registered domestic partner status, age, national origin or ancestry, physical or mental disability, medical condition (including genetic information or characteristics), sexual orientation, or any other characteristic protected by applicable federal, state, or local laws.

Apply for this job

PredictionHealth is hiring a Remote Clinical Data Engineer

9d

Data Engineer

Wider Circle | United States, Remote

Wider Circle is hiring a Remote Data Engineer

Overview: 

Data Engineers serve a unique and important role in daily operations at Wider Circle. Customer data is the bedrock of our business, and Data Engineering is responsible for laying the foundation for our success. Data Engineers work with internal and external stakeholders to gather, validate, clean and move data inside and outside the organization using technology and automation.  Our data engineering team is also responsible for quality curation of data to ensure our products are released on time and with minimal errors and/or bugs.

You will be joining a talented, fully remote Data Science, Engineering and Analytics team that handles a wide range of requests including customer data processing, weekly report automation, new product development and complex data integration. 

Company Overview

At Wider Circle, we connect neighbors for better health. Wider Circle's groundbreaking Connect for Life® program brings neighbors together in-person and online for health, wellness, and social activities that improve mental and physical health. We create webs of community circles by employing local and culturally competent engagement specialists, whose hand-on-hand approach to forming trusted circles is informed by a sophisticated analytics platform. We are on a mission to make the world a better place for older adults and disadvantaged communities. 

Immerse yourself in our LOVE, LEARN, GROW culture, where the ethos of making a profound impact, fostering respect, and nurturing career development reign supreme. We offer competitive compensation, benefits, and policies meticulously crafted to uphold our unwavering commitment to our internal team and the communities we proudly serve. Join us in shaping healthier futures and embracing boundless personal and collective growth opportunities.

Responsibilities

  • Develop and maintain data quality and accuracy dashboards and scorecards to track data quality and model performance
  • Develop, maintain, and enhance a comprehensive data quality framework that defines data standards, quality and accuracy expectations, and validation processes
  • Enhance our data quality through rapid testing, feedback, and insights
  • Partner with Engineering & Product to predict data quality issues and production flaws
  • Conceptualize data architecture (visually) and implement it practically in logical structures
  • Perform testing of data after ingestion and database loading
  • Manage internal SLAs for data quality and frequency
  • Provide expert support for solving complex problems of data integration across multiple data sets
  • Update and evolve our data ecosystem to streamline processes for maximum efficiency

Requirements

  • Degree in Computer Science, Information Systems, or equivalent education or work experience
  • 3+ years of experience with AWS or similar (S3, Redshift, RDS, EMR)
  • 3+ years of strong SQL and Python skills
  • Experience building test automation suites for test and production environments
  • Experience using APIs for data extraction and updating
  • Experience with Git and version control
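The data-quality responsibilities above largely come down to running declarative checks over a dataset and reporting a scorecard. A minimal standard-library sketch, with check names, fields, and data invented for illustration:

```python
def quality_scorecard(rows, checks):
    """Run each named check over all rows; report the pass rate per check."""
    report = {}
    for name, predicate in checks.items():
        passed = sum(1 for row in rows if predicate(row))
        report[name] = round(passed / len(rows), 2) if rows else None
    return report

rows = [
    {"member_id": "M1", "age": 70},
    {"member_id": "M2", "age": -5},   # fails the range check
    {"member_id": None, "age": 66},   # fails the completeness check
    {"member_id": "M4", "age": 81},
]

checks = {
    "member_id_present": lambda r: r["member_id"] is not None,
    "age_in_range": lambda r: 0 <= r["age"] <= 120,
}

print(quality_scorecard(rows, checks))
# {'member_id_present': 0.75, 'age_in_range': 0.75}
```

A production framework would persist these pass rates over time and alert when a rate drops below its SLA threshold.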

Really Nice to Have:

  • Experience with Healthcare Data (Claims, CDAs/HRAs, Eligibility)
  • Experience using Salesforce (Salesforce API)
  • Matillion, Mulesoft, or related tooling
  • Airflow, cron, or other automation tools
  • Experience working with Data Packages written in R or Python
  • Experience partnering with Data Scientists to optimize or productionalize models

Location

  • This fully remote position offers the flexibility to work from anywhere while contributing to meaningful projects in a supportive and dynamic environment.

Submission Requirements

  • Candidates must submit a GitHub or Bitbucket repository or provide coding samples of completed projects along with their resumes. We look forward to seeing your work!

Compensation

As a venture-backed company, Wider Circle offers competitive compensation including:

  • Performance-based incentive bonuses
  • Opportunity to grow with the company
  • Comprehensive health coverage including medical, dental, and vision
  • 401(k) Plan
  • Paid Time Off
  • Employee Assistance Program
  • Health Care FSA
  • Dependent Care FSA
  • Health Savings Account
  • Voluntary Disability Benefits
  • Basic Life and AD&D Insurance
  • Adoption Assistance Program
  • Training and Development
  • Annual base salary of $95,000-$110,000

And most importantly, an opportunity to Love, Learn, and Grow while making the world a better place!

Wider Circle is proud to be an equal opportunity employer that does not tolerate discrimination or harassment of any kind. Our commitment to Diversity & Inclusion supports our ability to build diverse teams and develop inclusive work environments. We believe in empowering people and valuing their differences. We are committed to equal employment opportunities without consideration of race, color, religion, ethnicity, citizenship, political activity or affiliation, marital status, age, national origin, ancestry, disability, veteran status, sexual orientation, gender identity, gender expression, sex or gender, or any other basis protected by law. 

See more jobs at Wider Circle

Apply for this job

11d

Data Engineer

SonderMindDenver, CO or Remote
S3, Master’s Degree, scala, nosql, sql, Design, java, c++, docker, kubernetes, python, AWS

SonderMind is hiring a Remote Data Engineer

 About SonderMind 

At SonderMind, we know that therapy works. SonderMind provides accessible, personalized mental healthcare that produces high-quality outcomes for patients. SonderMind's individualized approach to care starts with using innovative technology to help people not just find a therapist, but find the right, in-network therapist for them, should they choose to use their insurance. From there, SonderMind's clinicians are committed to delivering best-in-class care to all patients by focusing on high-quality clinical outcomes. To enable our clinicians to thrive, SonderMind defines care expectations while providing tools such as clinical note-taking, secure telehealth capabilities, outcome measurement, messaging, and direct booking.

To follow the latest SonderMind news, get to know our clients, and learn about what it’s like to work at SonderMind, you can follow us on Instagram, Linkedin, and Twitter.

 

About the Role

In this role, you will be responsible for designing, building, and managing the information infrastructure systems used to collect, store, process, and distribute data. You will also be tasked with transforming data into a format that can be easily analyzed. You will work closely with data engineers on data architectures, and with data scientists and business analysts to ensure they have the data necessary to complete their analyses, providing production support for the data and its transformations whenever needed.

 

Essential Functions

  • Strategically design, construct, install, test, and maintain highly scalable data management systems, including distributed databases, data warehouses, and cloud storage
  • Develop and maintain databases, data processing procedures, and pipelines
  • Integrate new data management technologies and software engineering tools into existing structures
  • Develop and implement processes for data mining, data modeling, and data production
  • Translate complex functional and technical requirements into detailed architecture, design, and high-performing software and applications
  • Create custom software components and analytics applications in Python and SQL
  • Troubleshoot data-related issues and perform root cause analysis to resolve them
  • Manage overall pipeline orchestration
  • Optimize data warehouse performance
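Several of the functions above (custom components in Python and SQL, pipeline troubleshooting, warehouse loading) come down to writing idempotent load steps. Below is a toy sketch using Python's built-in sqlite3 as a stand-in warehouse; the table and column names are invented for illustration, and SonderMind's actual stack (Snowflake, Redshift) would use equivalent MERGE/upsert SQL.

```python
import sqlite3

def load_sessions(conn: sqlite3.Connection, raw_rows: list) -> int:
    """Toy load step: create the target table if needed, then upsert rows.

    Schema is illustrative only. The upsert makes the load idempotent,
    so re-running a pipeline does not duplicate data.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sessions (id INTEGER PRIMARY KEY, minutes INTEGER)"
    )
    conn.executemany(
        "INSERT INTO sessions (id, minutes) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET minutes = excluded.minutes",
        raw_rows,
    )
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM sessions").fetchone()[0]

conn = sqlite3.connect(":memory:")
load_sessions(conn, [(1, 50), (2, 45)])
count = load_sessions(conn, [(2, 60), (3, 30)])  # reload overlaps row 2
# count == 3; row 2 now reads 60 minutes rather than being duplicated
```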

 

What does success look like?

Success in this role will be measured by the seamless and efficient operation of our data infrastructure. This includes maintaining high availability, ensuring optimal performance, and implementing robust security measures. Achieving success will also involve proactive monitoring, timely issue resolution, and continuous improvements to our systems. The ideal outcome is a stable, scalable, and secure data environment that supports the organization's goals and enables data-driven decision-making. This includes minimal downtime, accurate and timely data delivery, and the successful implementation of new technologies and tools. The individual will have demonstrated their ability to collaborate effectively to define solutions with both technical and non-technical team members across data science, engineering, product, and our core business functions. They will have made significant contributions to improving our data systems, whether through optimizing existing processes or developing innovative new solutions. Ultimately, their work will enable more informed and effective decision-making across the organization.

Who You Are

  • Bachelor’s degree in Computer Science, Engineering, or a related field
  • A minimum of three years of experience as a Data Engineer or in a similar role
  • Experience with data science and analytics engineering is a plus
  • Experience with AI/ML in GenAI or data software - including vector databases - is a plus
  • Proficient with scripting and programming languages (Python, Java, Scala, etc.)
  • In-depth knowledge of SQL and other database related technologies such as NoSQL
  • Experience with Snowflake, DBT, BigQuery, Fivetran, Segment, etc.
  • Experience with AWS cloud services (S3, RDS, Redshift, etc.)
  • Experience with data pipeline and workflow management tools such as Airflow
  • Strong negotiation and interpersonal skills: written, verbal, analytical
  • Motivated and influential – proactive with the ability to adhere to deadlines; work to “get the job done” in a fast-paced environment
  • Self-starter with the ability to multi-task

Preferred experience 

  • Master’s degree in Computer Science, Engineering, or a related field
  • Knowledge of Docker and Kubernetes to manage data applications in a scalable and efficient way
  • Proficiency with ETL (Extract, Transform, Load) tools like Apache NiFi, Talend, or Informatica is a big advantage

Our Benefits 

The anticipated salary rate for this role is between $108,000-135,000 per year.

As a leader in redesigning behavioral health, we are walking the walk with our employee benefits. We want the experience of working at SonderMind to accelerate people’s careers and enrich their lives, so we focus on meeting SonderMinders wherever they are and supporting them in all facets of their life and work.

Our benefits include:

  • A commitment to fostering flexible hybrid work
  • A generous PTO policy with a minimum of three weeks off per year
  • Free therapy coverage benefits to ensure our employees have access to the care they need (must be enrolled in our medical plans to participate)
  • Competitive Medical, Dental, and Vision coverage with plans to meet every need, including HSA ($1,100 company contribution) and FSA options
  • Employer-paid short-term, long-term disability, life & AD&D to cover life's unexpected events. Not only that, we also cover the difference in salary for up to seven (7) weeks of short-term disability leave (after the required waiting period) should you need to use it.
  • Eight weeks of paid Parental Leave (if the parent also qualifies for STD, this benefit is in addition which allows between 8-16 weeks of paid leave)
  • 401(k) retirement plan with 100% matching on up to 4% of base salary, vesting immediately
  • Travel to Denver 1x a year for annual Shift gathering
  • Fourteen (14) company holidays
  • Company shutdown between Christmas and New Year's
  • Supplemental life insurance, pet insurance coverage, commuter benefits and more!

Application Deadline

This position will be an ongoing recruitment process and will be open until filled.

 

Equal Opportunity 
SonderMind does not discriminate in employment opportunities or practices based on race, color, creed, sex, gender, gender identity or expression, pregnancy, childbirth or related medical conditions, religion, veteran and military status, marital status, registered domestic partner status, age, national origin or ancestry, physical or mental disability, medical condition (including genetic information or characteristics), sexual orientation, or any other characteristic protected by applicable federal, state, or local laws.

 

Apply for this job

11d

Lead Data Engineer

RKS ConsultingBordeaux, France, Remote
DevOps, terraform, sql, azure, git, docker, postgresql, kubernetes, python

RKS Consulting is hiring a Remote Lead Data Engineer

Job Description

Core requirements:
Propose and implement data architectures following Agile and CI/CD principles
Reduce costs as part of a FinOps approach
Mentor the data engineers and provide tooling for monitoring and development

Technical environment:
Languages: SQL, Python
Databases: Snowflake, SQL Server, PostgreSQL
DevOps: Git, TFS, Azure DevOps, Docker, Kubernetes, Liquibase
Messaging: Kafka
Infrastructure as Code: Terraform
Data Processing: DBT, Talend
Data Visualization: Power BI

Proven experience with the tools in the technical stack listed above
Technology watch on data architectures and application industrialization

At least 5 years of experience is required.

Qualifications

See more jobs at RKS Consulting

Apply for this job

12d

Sr. Data Engineer

Talent ConnectionPleasanton, CA, Remote
Design, java

Talent Connection is hiring a Remote Sr. Data Engineer

Job Description

Position Overview: As a Sr. Data Engineer, you will be pivotal in developing and maintaining data solutions that enhance our client's reporting and analytics capabilities. You will leverage a variety of data technologies to construct scalable, efficient data pipelines that support critical business insights and decision-making processes.

Key Responsibilities:

  • Architect and design data pipelines that meet reporting and analytics requirements.
  • Develop robust and scalable data pipelines to integrate data from diverse sources into a cloud-based data platform.
  • Convert business needs into architecturally sound data solutions.
  • Lead data modernization projects, providing technical guidance and setting design standards.
  • Optimize data performance and ensure prompt resolution of issues.
  • Collaborate with cross-functional teams to create efficient data flows.

Qualifications

Required Skills and Experience:

  • 7+ years of experience in data engineering and pipeline development.
  • 5+ years of experience in data modeling for data warehousing and analytics.
  • Proficiency with modern data architecture and cloud data platforms, including Snowflake and Azure.
  • Bachelor’s Degree in Computer Science, Information Systems, Engineering, Business Analytics, or a related field.
  • Strong skills in programming languages such as Java and Python.
  • Experience with data orchestration tools and DevOps/Data Ops practices.
  • Excellent communication skills, capable of simplifying complex information.

Preferred Skills:

  • Experience in the retail industry.
  • Familiarity with reporting tools such as MicroStrategy and Power BI.
  • Experience with tools like Streamsets and dbt.

See more jobs at Talent Connection

Apply for this job

13d

Senior Data Engineer

InstacartUnited States - Remote
airflow, sql, Design, backend, frontend

Instacart is hiring a Remote Senior Data Engineer

We're transforming the grocery industry

At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers.

Instacart has become a lifeline for millions of people, and we’re building the team to help push our shopping cart forward. If you’re ready to do the best work of your life, come join our table.

Instacart is a Flex First team

There’s no one-size fits all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work—whether it’s from home, an office, or your favorite coffee shop—while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

Overview

At Instacart, our mission is to create a world where everyone has access to the food they love and more time to enjoy it together. Millions of customers every year use Instacart to buy their groceries online, and the Data Engineering team is building the critical data pipelines that underpin all of the myriad of ways that data is used across Instacart to support our customers and partners.

 

About the Role - 

This is a general posting for multiple Senior Data Engineer roles open across our 4-sided marketplace. You’ll get the chance to learn about the different problems Data Engineering teams solve as you go through the process. Towards the end of your process, we’ll do a team-matching exercise to determine which of the open roles/teams you’ll join. You can find a blurb on each team at the bottom of this page. 

 

About the Team -

You will be joining a growing data engineering team and will tackle some of the most challenging and impactful problems that are transforming how people buy groceries every day. You will be embedded within a data-driven team as a trusted partner in uncovering barriers in the product’s usability, using these insights to inform product improvements that drive game-changing growth. We’re looking for a self-driven engineer who can hit the ground running to ultimately shape the landscape of data across the entire organization.



About the Job 

  • You will be part of a team with a large amount of ownership and autonomy.
  • Large scope for company-level impact working on financial data.
  • You will work closely with engineers and both internal and external stakeholders, owning a large part of the process from problem understanding to shipping the solution.
  • You will ship high quality, scalable and robust solutions with a sense of urgency.
  • You will have the freedom to suggest and drive organization-wide initiatives.

 

About You

Minimum Qualifications

  • 6+ years of working experience in a Data/Software Engineering role, with a focus on building data pipelines.
  • Expert in SQL, with working knowledge of Python.
  • Experience building high quality ETL/ELT pipelines.
  • Past experience with data immutability, auditability, slowly changing dimensions or similar concepts.
  • Experience building data pipelines for accounting/billing purposes.
  • Experience with cloud-based data technologies such as Snowflake, Databricks, Trino/Presto, or similar.
  • Adept at fluently communicating with many cross-functional stakeholders to drive requirements and design shared datasets.
  • A strong sense of ownership, and an ability to balance a sense of urgency with shipping high quality and pragmatic solutions.
  • Experience working with a large codebase on a cross functional team.
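Slowly changing dimensions, mentioned in the qualifications above, are worth a concrete illustration. The sketch below shows a minimal Type 2 update in pure Python, where history is preserved by closing out the old row instead of overwriting it; the column names are invented for the example and this is not Instacart's implementation.

```python
from datetime import date

def scd2_apply(dim: list, key: str, new_row: dict, today: date) -> list:
    """Apply a Type 2 slowly-changing-dimension update.

    Rows carry valid_from/valid_to; the current row has valid_to=None.
    Illustrative only: real warehouses do this in SQL (e.g. dbt snapshots).
    """
    out = []
    for row in dim:
        if row[key] == new_row[key] and row["valid_to"] is None:
            # Close out the current version instead of overwriting it.
            out.append({**row, "valid_to": today})
        else:
            out.append(row)
    out.append({**new_row, "valid_from": today, "valid_to": None})
    return out

dim = [{"retailer_id": 7, "tier": "gold",
        "valid_from": date(2024, 1, 1), "valid_to": None}]
dim = scd2_apply(dim, "retailer_id",
                 {"retailer_id": 7, "tier": "platinum"}, date(2024, 6, 1))
# dim now holds two versions: the closed-out gold row and the current platinum row
```

Keeping both versions is what makes point-in-time financial reporting auditable: a query as-of any date can recover the dimension values that were current then.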

 

Preferred Qualifications

  • Bachelor’s degree in Computer Science, computer engineering, electrical engineering OR equivalent work experience.
  • Experience with Snowflake, dbt (data build tool) and Airflow
  • Experience with data quality monitoring/observability, either using custom frameworks or tools like Great Expectations, Monte Carlo etc.

 

Open Roles

 

Finance Data Engineering 

About the Role -

The Finance data engineering team plays a critical role in defining how financial data is modeled and standardized for uniform, reliable, timely and accurate reporting. This is a high impact, high visibility role owning critical data integration pipelines and models across all of Instacart’s products. This role is an exciting opportunity to join a key team shaping the post-IPO financial data vision and roadmap for the company.

About the Team -

Finance data engineering is part of the Infrastructure Engineering pillar, working closely with accounting, billing & revenue teams to support the monthly/quarterly book close, retailer invoicing and internal/external financial reporting. Our team collaborates closely with product teams to capture critical data needed for financial use cases.

 

Growth Data and Marketing

About the Role - 

The Growth Data and Marketing team is an integral piece of Growth at Instacart, and is directly responsible for modeling the Paid Marketing world to ensure accurate and timely data. This role will be pivotal in building and maintaining data infrastructure that supports the performance and optimization of paid marketing campaigns. You will be at the center of driving data-driven decisions for paid marketing strategies.

About the Team -

The Growth Data and Marketing team is part of the Growth Systems org, working closely with Frontend and Backend Engineering, Data Scientists, and Machine Learning teams to drive and support key data decisions to dictate product, partnerships. Our team works closely with product and marketing teams to ensure the right data is used at the right time to drive key business metrics. 

 

Instacart provides highly market-competitive compensation and benefits in each location where our employees work. This role is remote and the base pay range for a successful candidate is dependent on their permanent work location. Please review our Flex First remote work policy here.

Offers may vary based on many factors, such as candidate experience and skills required for the role. Additionally, this role is eligible for a new hire equity grant as well as annual refresh grants. Please read more about our benefits offerings here.

For US based candidates, the base pay ranges for a successful candidate are listed below.

CA, NY, CT, NJ
$221,000-$245,000 USD
WA
$212,000-$235,000 USD
OR, DE, ME, MA, MD, NH, RI, VT, DC, PA, VA, CO, TX, IL, HI
$203,000-$225,000 USD
All other states
$183,000-$203,000 USD

See more jobs at Instacart

Apply for this job

15d

Data Engineer

Much Better AdventuresUnited Kingdom, Remote
remote-first

Much Better Adventures is hiring a Remote Data Engineer

We’re an ambitious, remote-first travel scale-up, eager to grow our team with an exceptional Data Engineer. If you’re passionate about leveraging data to solve meaningful problems, enjoy creating scalable infrastructure, and have a love for the outdoors, this is the opportunity for you!

The Role

You’ll be our first dedicated Data hire, building on our strong foundations to take our capabilities to the next level. You’ll work closely with product, engineering, and business teams to refine our data pipelines, optimise tracking, and empower decision-making across the company.

From enhancing our DBT pipelines and maintaining our Redshift warehouse to improving tracking with Segment and other analytics tools, you’ll play a key role in making sure we can deliver exceptional experiences for our customers and valuable insights for our team.

This is a hands-on role that will see you own our data engineering processes while collaborating across teams to support a wide variety of data-driven initiatives, from reporting and analytics to the groundwork for machine learning models.

Why You’ll Love It Here

We’re driven by a shared passion for leveraging data to solve meaningful problems and unlock new opportunities. Reporting directly to the CTO, you’ll have the autonomy to shape our data strategy while collaborating closely with others who are equally invested in creating a data-driven culture that drives impactful decisions.

We embrace a culture of learning and improvement, constantly evolving how we work to suit the challenges we face. You’ll find a supportive, collaborative environment where ideas are valued, feedback is encouraged, and experimentation is part of our DNA.

Key Responsibilities

  • Enhance and scale our data infrastructure: Build upon our existing DBT pipelines, Redshift warehouse, and Segment integration to ensure robust, scalable, and accessible data systems.
  • Optimise event tracking and analytics: Improve tracking and customer insights using tools like Segment, PostHog, and GA4.
  • Enable self-service data capabilities: Create data marts and user-friendly dashboards that empower teams to make informed, data-driven decisions.
  • Collaborate with stakeholders: Partner with product, engineering, and business teams to understand data needs and deliver impactful solutions.
  • Prepare for advanced analytics: Lay the groundwork for data science projects such as churn forecasting, dynamic pricing, and recommendation engines.
  • Maintain data quality: Ensure all data pipelines, systems, and models are accurate, reliable, and performant.
  • Deliver value through automation: Streamline reporting processes to provide clear, timely insights with minimal manual intervention.
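The self-service data marts described above boil down to rolling raw events up into tidy, queryable rows. A pure-Python sketch of that rollup is below; in this stack it would actually be a DBT model materialized in Redshift, and the event shape here is invented for illustration.

```python
from collections import defaultdict

def build_bookings_mart(events: list) -> dict:
    """Roll raw booking events up into one mart row per trip.

    The {"trip": ..., "value": ...} event shape is a hypothetical
    example, not Much Better Adventures' actual tracking schema.
    """
    mart = defaultdict(lambda: {"bookings": 0, "revenue": 0.0})
    for e in events:
        row = mart[e["trip"]]
        row["bookings"] += 1
        row["revenue"] += e["value"]
    return dict(mart)

events = [
    {"trip": "kayak-norway", "value": 900.0},
    {"trip": "kayak-norway", "value": 950.0},
    {"trip": "trek-atlas", "value": 700.0},
]
mart = build_bookings_mart(events)
# mart["kayak-norway"] == {"bookings": 2, "revenue": 1850.0}
```

The equivalent DBT model is a GROUP BY over the events table; the point of the mart is that dashboard users query these pre-aggregated rows instead of raw events.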

Broad Technical Experience:

  • Proven experience with Python and SQL, including DBT for transforming data.
  • Proven experience working with modern data warehousing systems like Redshift, Snowflake, or similar.

Bonus points:

  • Proven experience working with tracking and analytics tools like Segment, GA4, or PostHog.
  • Experience integrating data pipelines with services like Stitch Data or other ETL tools.
  • Hands-on experience with data orchestration tools like Airflow or Dagster.
  • Knowledge of containerisation (e.g., Docker), deployment pipelines, and monitoring tools.

Data Mindset:

  • You are passionate about creating scalable data solutions that deliver meaningful value.
  • You are comfortable engaging with non-technical stakeholders to understand their needs and design appropriate solutions.
  • You prioritise keeping data pipelines reliable and optimised while iterating on new challenges.
  • You build data tools that are accessible and valuable to the people who use them.
  • Driven to solve real-world problems for our hosts and our internal team.

Engineering Mindset:

  • You take time to understand the problem and design solutions before executing.
  • You approach projects with metrics in mind, ensuring success is measured objectively.
  • You thrive in environments with fast feedback loops and continuous improvement.

Experience Level

  • Mid to senior.

What We Offer

  • An entrepreneurial and creative environment where great ideas are actively encouraged, and taking responsibility for them is expected
  • The warm fuzzy feeling that comes with knowing you are making a huge difference to small independent businesses, local economies and communities
  • 38 days holiday per year (inclusive of public holidays) - to be used when you like
  • Annual company performance-based bonus
  • Flexible hours set up (40 hours p/w for full time roles), and a fully remote company
  • Company-wide, adventurous meet-ups
  • Experience what we do: everyone goes on a free MBA trip within their first year
  • A £500 annual travel voucher to spend on an MBA trip/s
  • 30% Employee discount, plus 15% friends and family discount for MBA trips
  • Generous Pension scheme (UK employees only)
  • Free access to private GP, and unlimited mental health support and counselling via our partner at BHSF.
  • Budget to set up a remote working space and access to co-working spaces
  • Supportive Maternity and Paternity Pay: we offer 16 weeks full pay if you’re the primary caregiver & 4 weeks full pay if you’re the secondary caregiver.

What does the typical interview process look like?

Our hiring process is fully remote, and all interviews are done online. Every application is carefully read by a real member of the team (no AI screening here).

  • Stage 1: A short automated coding assessment
  • Stage 2: A ‘get to know each other’ interview, to find out more about your experience and see if we’re a good fit. (approx 30–45 mins)
  • Stage 3: A technical assignment, plus preparation for a short presentation to be given in the interview.
  • Stage 4: In-depth interview where we review your assignment, listen to your presentation, and take a look at some code with two members of the MBA team. (Approx 60–90 mins)

Job ‘Need to Know’ details

  • Preferred Start Date: Jan / Feb 2025
  • Salary Range: £55-75k, depending on experience.
  • Working Hours: a full time role is 40 hours per week, with core hours being 1000 - 1500 GMT (regardless of where you are based), and a flexible hours policy for the remaining time. We also welcome applicants from those wanting to work part-time, but we require 80% (32 hours) minimum.
  • Location: you must be resident either in the UK or in Europe (max +2 hours GMT) 
    Note: Contract and benefits will vary depending on which country you are based in - this will be discussed at an appropriate stage in the interview process.

We are an equal opportunities employer and strongly encourage applications from a diverse range of backgrounds and industries. Our flexible working arrangements are designed to support everyone in the team to achieve that important work/life balance in a way that works for their particular circumstances.

See more jobs at Much Better Adventures

Apply for this job

17d

(Senior) Data Engineer - France

ShippeoParis, France, Remote
ML, airflow, sql, RabbitMQ, docker, kubernetes, python

Shippeo is hiring a Remote (Senior) Data Engineer - France

Job Description

The Data Intelligence Tribe is responsible for leveraging Shippeo’s data from our large shipper and carrier base to build data products and ML models that give our users (shippers and carriers alike) predictive insights. This tribe’s typical responsibilities are to ensure that users:

  • get accurately alerted in advance of any potential delays on their multimodal flows or anomalies so that they can proactively anticipate any resulting disruptions

  • extract the data they need, get direct access to it or analyze it directly on the platform to gain actionable insights that can help them increase their operational performance and the quality and compliance of their tracking

  • provide best-in-class data quality by implementing advanced cleansing & enhancement rules

As a Data Engineer at Shippeo, your objective is to ensure that data is available and exploitable by our Data Scientists and Analysts on our different data platforms. You will contribute to the construction and maintenance of Shippeo’s modern data stack that’s composed of different technology blocks:

  • Data Acquisition (Kafka, KafkaConnect, RabbitMQ),

  • Batch data transformation (Airflow, DBT),

  • Cloud Data Warehousing (Snowflake, BigQuery),

  • Stream/event data processing (Python, docker, Kubernetes) and all the underlying infrastructure that support these use cases.
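Stream/event processing in Python, the last block of the stack above, often amounts to consuming events and grouping them into micro-batches before transformation. The sketch below uses an in-memory iterable as a stand-in for a Kafka consumer; it illustrates the batching pattern only and is not Shippeo's implementation.

```python
from typing import Iterable, Iterator

def micro_batches(stream: Iterable, size: int) -> Iterator[list]:
    """Group an event stream into fixed-size micro-batches.

    In production the stream would come from a consumer client
    (e.g. Kafka); here any iterable works, which keeps it testable.
    """
    batch = []
    for event in stream:
        batch.append(event)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

events = [{"shipment": i} for i in range(5)]
batches = list(micro_batches(events, 2))
# three batches of sizes 2, 2, 1
```

Batching like this trades a little latency for far fewer writes to the warehouse, which is usually the right call for analytical sinks such as Snowflake or BigQuery.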

 

Qualifications

Required:

  • You have a degree (MSc or equivalent) in Computer Science.

  • 3+ years of experience as a Data Engineer.

  • Experience building, maintaining, testing and optimizing data pipelines and architectures

  • Programming skills in Python 

  • Advanced working knowledge of SQL, experience working with relational databases and familiarity with a variety of databases.

  • Working knowledge of message queuing and stream processing.

  • Advanced knowledge of Docker and Kubernetes.

  • Advanced knowledge of a cloud platform (preferably GCP).

  • Advanced knowledge of a cloud based data warehouse solution (preferably Snowflake).

  • Experience with Infrastructure as code (Terraform/Terragrunt)

  • Experience building and evolving CI/CD pipelines (Github Actions).

Desired: 

  • Experience with Kafka and KafkaConnect (Debezium).

  • Monitoring and alerting on Grafana / Prometheus.

  • Experience working on Apache Nifi.

  • Experience working with workflow management systems such as Airflow.

See more jobs at Shippeo

Apply for this job

18d

Data Engineer

Xe, Santiago, Santiago Metropolitan Region, Chile, Remote Hybrid
sql, Design, mobile, api, docker, python

Xe is hiring a Remote Data Engineer

At XE, we live currencies. We provide a comprehensive range of currency services and products, including our Currency Converter, Market Analysis, Currency Data API and quick, easy, secure Money Transfers for individuals and businesses. We leverage technology to deliver these services through our website, mobile apps and by phone. Last year, we helped nearly 300 million people access information about the currencies that matter to them, and over 150,000 people used us to send money overseas. Thousands of businesses relied on us for information about the currency markets, advice on managing their foreign exchange risk or trusted us with their business-critical international payments.

At XE, we share the belief that behind every currency exchange, query or transaction is a person or business trying to accomplish something important, so we work together to develop new and better currency services that put our customers first. We are proud to be part of Euronet Worldwide (Nasdaq: EEFT), a global leader in processing secure electronic financial transactions. Under Euronet, we have brought together our key brands – XE, HiFX and Currency Online– to become the business that XE is today.

XE is looking for an experienced Data Engineer to join the XE Data Science team. If you are easily excited by raw data and the boundless potential to transform it into useful shapes and forms, this is your dream job. Couple that with a best-in-class cloud data architecture, and you are off to the races. You will be supporting a data ecosystem that is fueled by 280 million annual visitors to our web and app properties. Combine this with an enterprise-grade events-based Martech stack and the opportunities really are limitless. This role will give you the unbridled freedom to explore your data passions to connect disparate data sources together to create meaningful business insights and drive rapid product improvement and growth. If you are also an API guru, then we want you...now. 

 

What You'll Do

  • Develop and maintain performant, data-serving API services 
  • Design, develop, and maintain microservices for real-time data ingestion and ETL pipelines  
  • Analyze and organize raw data across Xe's Martech stack 
  • Develop and Maintain data lakes 
  • Collaborate with data scientists and architects on several projects 
  • Ensure all components are highly scalable, maintainable, and monitored 
  • Optimize cloud services for security, cost and performance
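A data-serving API like the ones described above is easiest to build and test when the handler is a pure function of its inputs. The sketch below shows a hypothetical currency-conversion endpoint; the query shape, rate-key format, and response fields are all invented for illustration and are not Xe's actual API.

```python
import json

def convert_handler(query: dict, rates: dict) -> str:
    """Build a JSON response for a hypothetical /convert endpoint.

    Keeping the handler a pure function of (query, rates) makes it
    trivial to unit test and to mount behind any web framework.
    """
    src, dst = query["from"], query["to"]
    rate = rates[f"{src}->{dst}"]  # illustrative key format
    amount = float(query["amount"])
    body = {"from": src, "to": dst, "amount": amount,
            "converted": round(amount * rate, 2), "rate": rate}
    return json.dumps(body)

rates = {"USD->EUR": 0.92}
resp = convert_handler({"from": "USD", "to": "EUR", "amount": "100"}, rates)
```

In a real service the rates dict would be replaced by a live rates store, and the handler wired to an HTTP route.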

Who You Are

  • Degree in Computer Science, Software Engineering, or related discipline 
  • Strong proficiency with SQL and its variations amongst popular databases 
  • Strong knowledge of developing and maintaining API services in a cloud environment 
  • Strong object and service-oriented programming skills in Python to write efficient, scalable code 
  • Knowledge of modern containerization techniques - Docker, Docker Compose 
  • A strong interest in data and the insights it can provide to better serve our customers 
  • Experience with relational and unstructured databases and data lakes 
  • An understanding of business goals and how data policies can affect them 
  • Effective communication and collaboration skills 
  • A strong understanding of the concepts associated with privacy and data security 

Perks & Benefits

  • Annual salary increase review
  • End of the year bonus (Christmas bonus)
  • ESPP (Employee Stock Purchase Plan)
  • 15 days vacation per year
  • Paid day off for your birthday
  • Insurance guaranteed for employees (Health, Oncological, Dental, Life Insurance)
  • No fee when using RIA service/wire transfers

See more jobs at Xe

Apply for this job

18d

Data Engineer

XeBrazil, Remote
sql, Design, mobile, api, docker, python

Xe is hiring a Remote Data Engineer

At XE, we live currencies. We provide a comprehensive range of currency services and products, including our Currency Converter, Market Analysis, Currency Data API and quick, easy, secure Money Transfers for individuals and businesses. We leverage technology to deliver these services through our website, mobile apps and by phone. Last year, we helped nearly 300 million people access information about the currencies that matter to them, and over 150,000 people used us to send money overseas. Thousands of businesses relied on us for information about the currency markets, advice on managing their foreign exchange risk or trusted us with their business-critical international payments.

At XE, we share the belief that behind every currency exchange, query or transaction is a person or business trying to accomplish something important, so we work together to develop new and better currency services that put our customers first. We are proud to be part of Euronet Worldwide (Nasdaq: EEFT), a global leader in processing secure electronic financial transactions. Under Euronet, we have brought together our key brands – XE, HiFX and Currency Online– to become the business that XE is today.

XE is looking for an experienced Data Engineer to join the XE Data Science team. If you are easily excited by raw data and the boundless potential to transform it into useful shapes and forms, this is your dream job. Couple that with a best-in-class cloud data architecture, and you are off to the races. You will be supporting a data ecosystem that is fueled by 280 million annual visitors to our web and app properties. Combine this with an enterprise-grade, events-based Martech stack and the opportunities really are limitless. This role will give you the unbridled freedom to explore your data passions, connect disparate data sources, and create meaningful business insights that drive rapid product improvement and growth. If you are also an API guru, then we want you...now. 

 

What You'll Do

  • Develop and maintain performant, data-serving API services 
  • Design, develop, and maintain microservices for real-time data ingestion and ETL pipelines  
  • Analyze and organize raw data across Xe's Martech stack 
  • Develop and maintain data lakes 
  • Collaborate with data scientists and architects on several projects 
  • Ensure all components are highly scalable, maintainable, and monitored 
  • Optimize cloud services for security, cost and performance
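
For a flavour of what analyzing and organizing raw Martech data looks like in practice, here is a minimal sketch of an event-normalisation step. The field names and event shape are purely illustrative, not Xe's actual schema:

```python
from datetime import datetime, timezone

def normalise_event(raw: dict) -> dict:
    """Flatten a raw tracking event into a typed record.

    Hypothetical shape: field names are illustrative only.
    """
    return {
        "event_id": str(raw["id"]),
        "event_type": raw.get("type", "unknown").lower(),
        "occurred_at": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        "user_id": raw.get("user"),
    }

# Example: a click event from a web property
record = normalise_event({"id": 42, "type": "CLICK", "ts": 1700000000, "user": "u-7"})
```

Small, pure functions like this keep ingestion pipelines easy to test and to reuse across microservices.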

Who You Are

  • Degree in Computer Science, Software Engineering, or related discipline 
  • Strong proficiency with SQL and its dialect variations across popular databases 
  • Strong knowledge of developing and maintaining API services in a cloud environment 
  • Strong object and service-oriented programming skills in Python to write efficient, scalable code 
  • Knowledge of modern containerization techniques such as Docker and Docker Compose 
  • A strong interest in data and the insights it can provide to better serve our customers 
  • Experience with relational and unstructured databases and data lakes 
  • An understanding of business goals and how data policies can affect them 
  • Effective communication and collaboration skills 
  • A strong understanding of the concepts associated with privacy and data security 
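
Core SQL travels well between engines. As a small, self-contained illustration (using Python's built-in sqlite3 with an invented schema), a standard aggregation reads the same on most popular relational databases:

```python
import sqlite3

# In-memory database; the schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO transfers VALUES (?, ?)",
    [("a", 100.0), ("a", 50.0), ("b", 200.0)],
)

# Aggregate spend per user: portable SQL that runs unchanged on
# PostgreSQL, MySQL, SQLite, and most other engines.
rows = conn.execute(
    "SELECT user_id, SUM(amount) FROM transfers GROUP BY user_id ORDER BY user_id"
).fetchall()
```

Dialects mostly diverge at the edges (date functions, upserts, window-function quirks), which is where cross-database proficiency earns its keep.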

Perks & Benefits

  • Annual salary increase review
  • End-of-year bonus (Christmas bonus)
  • ESPP (Employee Stock Purchase Plan)
  • 30 days vacation per year
  • Insurance guaranteed for employees (Health, Oncological, Dental, Life Insurance)
  • No fee when using RIA service/wire transfers

See more jobs at Xe

Apply for this job

18d

Senior Data Engineer

IncreasinglyBengaluru, India, Remote
airflow, scrum

Increasingly is hiring a Remote Senior Data Engineer

Job Description

Are you ready to embark on a thrilling data adventure? Buckle up, because we're about to take you on a wild ride through the world of big data!

  • Become the master of NiFi flows! You'll be creating and updating pipelines like a digital wizard, managing a colossal NiFi environment that'll make your head spin (in a good way, of course).
  • Tame the Hive and wrangle Impala in our Cloudera Data Platform jungle. Your table management skills will be put to the test in this data safari!
  • Join our band of merry architects as we build the future of data processing. You'll be jamming with NiFi, Kafka, and other cool cats in our tech orchestra.
  • Put on your detective hat and dive into data analysis. You'll be uncovering insights faster than you can say "Eureka!"
  • Become a Scrum superhero! You'll leap tall backlogs in a single bound and sprint through ceremonies with the agility of a data ninja.
  • Channel your inner polyglot and become the ultimate translator between BAs and bits. Your ability to speak both business and data will make you the life of the tech party!

Get ready to have a blast while pushing the boundaries of data engineering. It's not just a job - it's a data-driven adventure!

Qualifications

  • Experience with cloud technologies as a Data Engineer
  • Technical expertise with Big Data tools and methodologies
  • Experience working with Apache Kafka
  • Experience working with Hive and Impala
  • Skills working with Airflow
  • Experience working with Hadoop ecosystem (CDP/Cloudera Data Platform)
  • Hands-on experience with Data Ingestion
  • Proficiency in SQL/PostgreSQL
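
Much of hands-on data ingestion boils down to small, testable steps. As a hypothetical example (not Increasingly's actual code), an at-least-once source such as Kafka can redeliver messages, so an ingested batch often gets a dedupe pass before it lands:

```python
def dedupe_by_key(records, key="id"):
    """Keep the first occurrence of each key.

    A common step when an at-least-once delivery source
    (e.g. Kafka) can hand the consumer duplicate records.
    """
    seen = set()
    out = []
    for rec in records:
        k = rec[key]
        if k not in seen:
            seen.add(k)
            out.append(rec)
    return out

# A batch with one redelivered record (id=1)
batch = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 1, "v": "a"}]
clean = dedupe_by_key(batch)
```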

See more jobs at Increasingly

Apply for this job

25d

Senior Data Engineer

ZegoLondon Area,England,United Kingdom, Remote Hybrid
ML, terraform, airflow, sql, Design, docker, postgresql, kubernetes, python, AWS, backend

Zego is hiring a Remote Senior Data Engineer

About Zego

At Zego, we understand that traditional motor insurance holds good drivers back. It's too complicated, too expensive, and it doesn't reflect how well you actually drive. Since 2016, we have been on a mission to change that by offering the lowest priced insurance for good drivers.

From van drivers and gig workers to everyday car drivers, our customers are the driving force behind everything we do. We've sold tens of millions of policies and raised over $200 million in funding. And we’re only just getting started.

Overview of the Data Engineering team: 

At Zego the Data Engineering team is integral to our data platform, working closely with Software Engineers, Data Scientists and Data Analysts along with other areas of the business. We use a variety of internal and external tooling to maintain our data repositories. We are looking for people who have a solid understanding of ETL and ELT paradigms, are comfortable using Python and SQL, appreciate good software engineering and data infrastructure principles, are eager to work with complex and fast-growing datasets, show a strong desire to learn, and communicate well.

Our stack involves, but is not limited to, Airflow, Data Build Tool (DBT), a multitude of AWS services, Stitch and CI/CD pipelines. As a Data Engineer you will have the opportunity to champion emerging technologies where they can add value to the business and promote better ways of working.

It is an exciting time to join, and you’ll partner with world class engineers, analysts and product managers to help make Zego the best loved insurtech in the world.

Over the next 12 months you will:

  • Assist in developing and maintaining our ETL and ELT pipelines.
  • Support our data scientists in the development and implementation of our ML pricing models and experiments.
  • Help drive and evolve the architecture of our data ecosystem.
  • Collaborate with product managers and across teams to bring new products and features to the market.
  • Help drive data as a product, by growing our data platform with a focus on strong data modelling, quality, usage and efficiency.
  • Build tailored data replication pipelines as our backend application is broken into microservices.
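
The ETL and ELT paradigms behind this work differ mainly in where the transform runs. A minimal plain-Python sketch of the ELT shape (all names invented; in practice an orchestrator like Airflow would schedule the steps and a tool like DBT would own the in-warehouse transform):

```python
def extract():
    # Pretend source system: raw policy events as they arrive.
    return [{"policy": "P1", "premium": "120.50"},
            {"policy": "P2", "premium": "80.00"}]

def load(raw, warehouse):
    # ELT: land the data untouched first...
    warehouse["raw_policies"] = list(raw)

def transform(warehouse):
    # ...then model it inside the warehouse, e.g. casting types
    # and shaping records for analysts.
    warehouse["policies"] = [
        {"policy": r["policy"], "premium": float(r["premium"])}
        for r in warehouse["raw_policies"]
    ]

warehouse = {}
load(extract(), warehouse)
transform(warehouse)
```

Keeping the raw landing zone separate from the modelled tables is what lets transforms be re-run and evolved without re-extracting from source systems.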

About you

We are looking for somebody with a working knowledge of building data pipelines and the underlying infrastructure. Experience in data warehouse design, following best practices during implementation, is a big plus. You have worked with (or are keen to work with) Data Analysts, Data Scientists and Software Engineers.

Practical knowledge of (or strong desire to learn) the following or similar technologies:

  • Python
  • Airflow
  • Databases (PostgreSQL)
  • Data Warehousing (Redshift / Snowflake)
  • SQL (We use DBT for modelling data in the warehouse)
  • Data Architecture including Dimensional Modelling
  • Experience in using infrastructure as code tools (e.g. Terraform)

Otherwise an interest in learning these, with the support of the team, is essential. We're looking for people with a commitment to building, nurturing, and iterating on an ever-evolving data ecosystem.
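
Dimensional modelling, from the list above, organises a warehouse into fact and dimension tables. A tiny, self-contained illustration using Python's built-in sqlite3 (table and column names are invented, not Zego's warehouse):

```python
import sqlite3

# A minimal star schema: one fact table keyed to one dimension.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_policy_sales (product_id INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'van'), (2, 'car');
INSERT INTO fact_policy_sales VALUES (1, 10.0), (1, 15.0), (2, 7.5);
""")

# Dimensional queries join facts to their dimensions and aggregate.
rows = db.execute("""
    SELECT d.name, SUM(f.amount)
    FROM fact_policy_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.name
    ORDER BY d.name
""").fetchall()
```

The same star-schema pattern scales up in Redshift or Snowflake, where DBT models typically build the fact and dimension tables.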

Other beneficial skills include:

  • Familiarity with Docker and/or Kubernetes (EKS)
  • Implementation / Contribution to building a Data Lake or Data Mesh
  • Having worked with a wide variety of AWS services
  • Open Table Formats (e.g. Apache Iceberg)

How we work

We believe that teams work better when they have time to collaborate and space to get things done. We call it Zego Hybrid.

Our hybrid way of working is unique. We don't mandate fixed office days. Instead, we foster a flexible approach that empowers every Zegon to perform at their best. We ask you to spend at least one day a week in our central London office. You have the flexibility to choose the day that works best for you and your team. We cover the costs for all company-wide events (3 per year), and also provide a separate hybrid contribution to help pay towards other travel costs. We think it’s a good mix of collaborative face time and flexible home-working, setting us up to achieve the right balance between work and life.

Benefits

We reward our people well. Join us and you’ll get a market-competitive salary, private medical insurance, company share options, generous holiday allowance, and a whole lot of wellbeing benefits. And that’s just for starters.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, marital status, or disability status.

See more jobs at Zego

Apply for this job