Data Engineer Remote Jobs

103 Results

26d

Data Engineer (Contract)

Super Dispatch · United States - Remote
jira, sql, qa, git

Super Dispatch is hiring a Remote Data Engineer (Contract)

Super Dispatch is looking for a Data Engineer (contract) to focus on creating credible data sources and building compelling stories that drive action. This temporary role is expected to start mid-November to early December, with the end date to be determined.

The Analytics team focuses on automating manual reporting processes, delivering actionable insights to our internal business stakeholders, and providing enterprise-level reporting. We are seeking someone driven by curiosity, energized by problem solving, and passionate about continuous learning in the data analytics community.

Responsibilities:

Own:

  • Be the go-to person for SQL on the analytics team, assisting team members with their queries.
  • Create and manage data sources for company-wide consumption.
  • Contribute to the Analytics team’s GitHub repository by adding clean, QA’d SQL queries.
  • Manage our Redshift instance and user permissions.
  • Plan and write queries to create published data sources for reporting (see the sketch after this list).
  • Understand and document the meaning behind the data to enable business decision-making.
  • Deliver ad-hoc reports and collaborate with other Data team members to streamline and automate processes, ensuring data consistency, integrity, and transparency.
  • Utilize tools (Notion, Git, Jira, etc.) to track project workflow and changes.
  • Build and deploy DBT models.
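
As a rough illustration of the workflow above, here is a minimal sketch of a QA'd query against a published Redshift data source. The driver (psycopg2) and every connection detail, schema, table and column name below are assumptions for the sketch, not details from the posting.

```python
# Minimal sketch: run a QA'd query against a published Redshift data source.
# psycopg2 and all names below are hypothetical choices, not from the posting.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical
    port=5439,
    dbname="analytics",   # hypothetical
    user="analytics_ro",  # hypothetical
    password="...",       # supply via a secrets manager in practice
)

QUERY = """
    SELECT order_date, COUNT(*) AS orders  -- hypothetical columns
    FROM published.orders                  -- hypothetical published data source
    GROUP BY order_date
    ORDER BY order_date;
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    rows = cur.fetchall()
    # A basic QA check before the query lands in the team's GitHub repository.
    assert rows, "QA check failed: published data source returned no rows"
    for order_date, n_orders in rows[:5]:
        print(order_date, n_orders)
```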

Assist:

  • Work closely with other internal teams to understand their data needs and create reporting solutions.

Preferred Profile:

Tech:

  • 2+ years of experience using SQL queries, RDBMS query functions (Amazon Redshift, MS SQL Server), and writing custom calculations.
  • 2+ years of experience in Data, Data Analytics, or Business Intelligence, with expertise in data collection, interpretation, analysis, and visualization.
  • Proficiency with data pipeline tools such as Census, Fivetran, Segment, and DBT for modeling and managing data workflows.
  • Experience with project management and version control tools such as Notion, Git, and Jira.
  • Strong understanding of business metrics and how to translate them into data solutions.
  • High attention to detail with the ability to create efficient, scalable SQL queries.

Who you are:

  • Analytical thinker with strong problem-solving skills.
  • Detail-oriented with the ability to understand and manage complex data relationships.
  • Motivated by continuous learning and staying current with data analytics trends.
  • Able to work collaboratively across teams to deliver results.

See more jobs at Super Dispatch

Apply for this job

27d

Lead Data Engineer

Full Time, Bachelor's degree, sql, Design, python

Talent Inc. is hiring a Remote Lead Data Engineer

See more jobs at Talent Inc.

Apply for this job

29d

Data Engineer

DevOps, sql, Design, azure, python

Sunscrapers Sp. Openings is hiring a Remote Data Engineer

Are you ready to take the challenge?

We’re looking for a Data Engineer to join our team in Warsaw or remotely.

Advance your career with Sunscrapers, a leading force in software development, now expanding its presence in a data-centric environment. Join us in our mission to help clients grow and innovate through a comprehensive tech stack and robust data-related projects. Enjoy a competitive compensation package that reflects your skills and expertise while working in a company that values ambition, technical excellence, trust-based partnerships, and actively supports contributions to R&D initiatives.

As a Data Engineer, you’ll play a pivotal role in building a robust data platform that drives engagement solutions for leading healthcare brands. Your work will directly support the development of omnichannel digital health experiences, empowering consumers to access the programs, benefits, and care they need.

In this role, you’ll leverage cutting-edge technologies such as Apache Spark, Databricks, and Delta Tables to design and implement scalable data solutions. You’ll enable actionable insights by integrating diverse data sources, building efficient pipelines, and supporting data-driven decision-making.
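
To make that concrete, here is a minimal PySpark sketch of writing a Delta Table and querying it back through SQL. It assumes the open-source delta-spark package and hypothetical paths, table and column names; on Databricks the session wiring below is handled for you.

```python
# Minimal sketch: batch-append events to a Delta Table, then query it via SQL.
# Assumes the delta-spark pip package; all paths and names are hypothetical.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

builder = (
    SparkSession.builder.appName("delta-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Batch ingest: append new engagement events (hypothetical schema).
events = spark.createDataFrame(
    [("u1", "enrolled"), ("u2", "completed")], ["user_id", "event"]
)
events.write.format("delta").mode("append").save("/tmp/delta/engagement_events")

# Expose the Delta Table to a SQL engine, as Databricks SQL would.
spark.sql("CREATE TABLE IF NOT EXISTS engagement_events USING DELTA "
          "LOCATION '/tmp/delta/engagement_events'")
spark.sql("SELECT event, COUNT(*) AS n FROM engagement_events GROUP BY event").show()
```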

Your responsibilities will include:

  • Design and optimize data infrastructure using Python, PySpark, Apache Spark, and Delta Spark,
  • Implement strong data governance frameworks to ensure quality, security, and compliance,
  • Connect Delta Tables to a SQL engine (like Databricks SQL) for efficient querying and analytics,
  • Leverage strong DevOps expertise to deploy and maintain data systems in Azure,
  • Create batch and streaming pipelines for data processing.

What's important for us?

  • At least 3 years of professional experience as a data engineer,
  • Undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar,
  • Excellent command of spoken and written English (at least C1),
  • Experience in designing data infrastructure using Python, PySpark, Apache Spark, and Delta Spark,
  • Experience in managing production Spark clusters, e.g. in Databricks,
  • Proficiency in SQL and experience with Delta Lake architectures,
  • Great analytical skills and attention to detail - asking questions and proactively searching for answers,
  • Creative problem-solving skills,
  • Great customer service and troubleshooting skills.

You'll score extra points for:

  • Familiarity with CI/CD pipelines and containerization (Docker, Kubernetes),
  • Experience with real-time data tools like Kafka or Azure Event Grid,
  • Experience with BigQuery,
  • Experience in managing data governance in the healthcare space.

What do we offer?

  • Working alongside a talented team that’s changing the image of Poland abroad.
  • Flexible working hours and remote work possibility.
  • Comfortable office in a penthouse in central Warsaw equipped with all the necessary tools to conquer the universe (Macbook, external screen, ergonomic chairs).
  • Fully equipped kitchen with fruit, hot and cold drinks.
  • Multisport card & Private medical care.
  • Culture of good feedback: evaluation meetings, mentoring.
  • We value and appreciate our engineers’ eagerness to learn and improve, so we strongly encourage and support their growth!

Sounds like a perfect place for you? Don’t hesitate to click apply and submit your application today!

See more jobs at Sunscrapers Sp. Openings

Apply for this job

29d

Senior Data Engineer

Sunscrapers Sp. Openings · Warsaw, Masovian Voivodeship, Poland, Remote
Design

Sunscrapers Sp. Openings is hiring a Remote Senior Data Engineer

Are you ready to take the challenge?

We’re looking for a Senior Data Engineer to join our team in Warsaw or remotely.

Advance your career with Sunscrapers, a leading force in software development, now expanding its presence in a data-centric environment. Join us in our mission to help clients grow and innovate through a comprehensive tech stack and robust data-related projects. Enjoy a competitive compensation package that reflects your skills and expertise while working in a company that values ambition, technical excellence, trust-based partnerships, and actively supports contributions to R&D initiatives.

As a Senior Data Engineer, you’ll design and implement a system supporting the decision process for a US-based private investment firm. You’ll need to integrate data from multiple systems and sources to enable data insights, machine learning and data-driven decision processes. You’ll build integrated data models, a data warehouse and data pipelines.

The ideal candidate will be well organized, eager to constantly improve and learn, driven and, most of all - a team player!

Your responsibilities will include:

  • Modeling datasets and schemas for consistency and easy access,
  • Designing and implementing data transformations and data marts,
  • Integrating third-party systems and external data sources into the data warehouse,
  • Building data flows for fetching, aggregation and data modeling using batch pipelines.

What's important for us?

  • At least 5 years of professional experience as a data engineer,
  • Undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar,
  • Excellent command of spoken and written English (at least C1),
  • Strong professional experience with Python and SQL,
  • Hands-on experience with DBT and Snowflake,
  • Experience in building data pipelines with Airflow or alternative solutions (see the sketch after this list),
  • Strong understanding of various data modeling techniques, like the Kimball star schema,
  • Great analytical skills and attention to detail - asking questions and proactively searching for answers,
  • Creative problem-solving skills,
  • Great customer service and troubleshooting skills.
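
As a sketch of the kind of batch pipeline described in the requirements, here is a minimal Airflow DAG that runs an extract step and then dbt. It assumes Airflow 2.4+; the DAG id, script path and dbt project directory are hypothetical.

```python
# Minimal sketch of a daily batch pipeline: extract sources, then run dbt.
# Assumes Airflow 2.4+; the id, schedule and paths are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_refresh",      # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_sources",
        bash_command="python /opt/pipelines/extract.py",          # hypothetical
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/warehouse",  # hypothetical
    )
    extract >> transform
```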

You will score extra points for:

  • Expertise in AWS or Azure stack,
  • Experience with infrastructure-as-code tools, like Terraform,
  • DevOps skills to automate deployment and streamline development,
  • Good understanding of Docker, Kubernetes and AWS EKS.

What do we offer?

  • Working alongside a talented team that’s changing the image of Poland abroad.
  • Flexible working hours and remote work possibility.
  • Comfortable office in a penthouse in central Warsaw equipped with all the necessary tools to conquer the universe (Macbook, external screen, ergonomic chairs).
  • Fully equipped kitchen with fruit, hot and cold drinks.
  • Multisport card & Private medical care.
  • Culture of good feedback: evaluation meetings, mentoring.
  • We value and appreciate our engineers’ eagerness to learn and improve, so we strongly encourage and support their growth!


Sounds like a perfect place for you? Don’t hesitate to click apply and submit your application today!

See more jobs at Sunscrapers Sp. Openings

Apply for this job

30d

Senior Data Engineer

Braze · Remote - Ontario
Sales, Bachelor's degree, airflow, sql, Design, kubernetes

Braze is hiring a Remote Senior Data Engineer

At Braze, we have found our people. We’re a genuinely approachable, exceptionally kind, and intensely passionate crew.

We seek to ignite that passion by setting high standards, championing teamwork, and creating work-life harmony as we collectively navigate rapid growth on a global scale while striving for greater equity and opportunity – inside and outside our organization.

To flourish here, you must be prepared to set a high bar for yourself and those around you. There is always a way to contribute: Acting with autonomy, having accountability and being open to new perspectives are essential to our continued success. Our deep curiosity to learn and our eagerness to share diverse passions with others gives us balance and injects a one-of-a-kind vibrancy into our culture.

If you are driven to solve exhilarating challenges and have a bias toward action in the face of change, you will be empowered to make a real impact here, with a sharp and passionate team at your back. If Braze sounds like a place where you can thrive, we can’t wait to meet you.

WHAT YOU’LL DO

Join our dynamic team dedicated to revolutionizing data infrastructure and products for impactful decision-making at Braze. We collaboratively shape data engineering strategies, optimizing data pipelines and architecture to drive business growth and enhance customer experiences.

Responsibilities:

  • Lead the design, implementation, and monitoring of scalable data pipelines and architectures using tools like Snowflake and dbt (see the sketch after this list)
  • Develop and maintain robust ETL processes to ensure high-quality data ingestion, transformation, and storage
  • Collaborate closely with data scientists, analysts, and other engineers to design and implement data solutions that drive customer engagement and retention
  • Optimize and manage data flows and integrations across various platforms and applications
  • Ensure data quality, consistency, and governance by implementing best practices and monitoring systems
  • Work extensively with large-scale event-level data, aggregating and processing it to support business intelligence and analytics
  • Implement and maintain data products using advanced techniques and tools
  • Collaborate with cross-functional teams including engineering, product management, sales, marketing, and customer success to deliver valuable data solutions
  • Continuously evaluate and integrate new data technologies and tools to enhance our data infrastructure and capabilities
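
For illustration, a minimal sketch of the ingestion step such a Snowflake and dbt pipeline might start with. The account, credentials, stage and table names are hypothetical, and dbt models would handle transformation downstream.

```python
# Minimal sketch: land raw event-level data in Snowflake ahead of dbt models.
# All connection details and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",  # hypothetical
    user="etl_user",            # hypothetical
    password="...",             # supply via a secrets manager in practice
    warehouse="TRANSFORM_WH",   # hypothetical
    database="RAW",
    schema="EVENTS",
)

cur = conn.cursor()
try:
    cur.execute("""
        COPY INTO raw_events               -- hypothetical landing table
        FROM @events_stage                 -- hypothetical external stage
        FILE_FORMAT = (TYPE = JSON)
    """)
    print(cur.fetchall())  # per-file load results
finally:
    cur.close()
    conn.close()
```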

WHO YOU ARE

The ideal candidate for this role possesses:

  • 5+ years of hands-on experience in data engineering, cloud data warehouses, and ETL development, preferably in a customer-facing environment
  • Proven expertise in designing and optimizing data pipelines and architectures
  • Strong proficiency in advanced SQL and data modeling techniques
  • A track record of leading impactful data projects from conception to deployment
  • Effective collaboration skills with cross-functional teams and stakeholders
  • In-depth understanding of technical architecture and data flow in a cloud-based environment
  • Ability to mentor and guide junior team members on best practices for data engineering and development
  • Passion for building scalable data solutions that enhance customer experiences and drive business growth
  • Strong analytical and problem-solving skills, with a keen eye for detail and accuracy
  • Extensive experience working with and aggregating large event-level data
  • Familiarity with data governance principles and ensuring compliance with industry regulations
  • Experience with Kubernetes for container orchestration and Airflow for workflow management is preferred, but not required

WHAT WE OFFER
 
Details of these benefit plans will be provided if a candidate receives an offer of employment. Benefits may vary by location; find out more here.
From offering comprehensive benefits to fostering flexible environments, we’ve got you covered so you can prioritize work-life harmony.
  • Competitive compensation that may include equity
  • Retirement and Employee Stock Purchase Plans
  • Flexible paid time off
  • Comprehensive benefit plans covering medical, dental, vision, life, and disability
  • Family services that include fertility benefits and equal paid parental leave
  • Professional development supported by formal career pathing, learning platforms, and tuition reimbursement
  • Community engagement opportunities throughout the year, including an annual company wide Volunteer Week
  • Employee Resource Groups that provide supportive communities within Braze
  • Collaborative, transparent, and fun culture recognized as a Great Place to Work®

ABOUT BRAZE

Braze is a leading customer engagement platform that powers lasting connections between consumers and brands they love. Braze allows any marketer to collect and take action on any amount of data from any source, so they can creatively engage with customers in real time, across channels from one platform. From cross-channel messaging and journey orchestration to AI-powered experimentation and optimization, Braze enables companies to build and maintain absolutely engaging relationships with their customers that foster growth and loyalty.

Braze is proudly certified as a Great Place to Work® in the U.S., the UK and Singapore. In 2024, we ranked #3 on Great Place to Work UK’s Best Workplaces (Large), #3 on Fortune Best Workplaces for Parents (Small and Medium), #13 on Great Place to Work UK’s Best Workplaces for Development (Large), #14 on Great Place to Work UK’s Best Workplaces for Wellbeing (Large), #14 on Fortune Best Workplaces in Technology (Small and Medium), #26 in Great Place to Work UK’s Best Workplaces for Women (Large), #31 in Fortune Best Workplaces (Medium), and #37 in Fortune Best Workplaces for Women.

We were also featured in the Top 10% of US News & World Best Companies to Work For, Top 100 Great Place to Work UK’s Best Workplaces in Europe (Medium), and in Built In’s Best Places to Work.

You’ll find many of us at headquarters in New York City or around the world in Austin, Berlin, Bucharest, Chicago, Dubai, Jakarta, London, Paris, San Francisco, Singapore, São Paulo, Seoul, Sydney and Tokyo – not to mention our employees in nearly 50 remote locations.

BRAZE IS AN EQUAL OPPORTUNITY EMPLOYER

At Braze, we strive to create equitable growth and opportunities inside and outside the organization.

Building meaningful connections is at the heart of everything we do, and that includes our recruiting practices. We're committed to offering all candidates a fair, accessible, and inclusive experience – regardless of age, color, disability, gender identity, marital status, maternity, national origin, pregnancy, race, religion, sex, sexual orientation, or status as a protected veteran. When applying and interviewing with Braze, we want you to feel comfortable showcasing what makes you you.

We know that sometimes different circumstances can lead talented people to hesitate to apply for a role unless they meet 100% of the criteria. If this sounds familiar, we encourage you to apply, as we’d love to meet you.

Please see our Candidate Privacy Policy for more information on how Braze processes your personal information during the recruitment process and, if applicable based on your location, how you can exercise any privacy rights.

See more jobs at Braze

Apply for this job

30d

Senior Data Engineer

SOPHiA GENETICS · Rolle, Vaud, Switzerland, Remote Hybrid
Design

SOPHiA GENETICS is hiring a Remote Senior Data Engineer

SOPHiA GENETICS (NASDAQ: SOPH) combines Data-Driven Medicine, Genomics and Radiomics, to ensure that the data used to help patients today will also benefit the patients of tomorrow. To help us achieve our ambitious mission, we are now searching for a Senior Data Engineer with Big Data experience to join our team in Rolle, Switzerland.

Why us:

We believe there is a smarter, more data-driven way to make decisions in healthcare and our AI SaaS Platform enables that. Our platform is a one-of-a-kind globally distributed information system that brings together hospitals and labs to provide data ingestion and processing, analysis and modeling, reporting and intelligence, distribution and sharing of a multitude of complex sources of structured and unstructured data, including genomics, imaging, and clinical data, delivered as a multi-tenant SaaS platform on the cloud. 

As a Senior Data Engineer, you will be part of a team of engineers focused on developing and maintaining our core internal data platform and microservices connecting it to all corners of our business. 

Your mission:

Reporting directly to the Head of Data, the Senior Data Engineer will be responsible for the development of the core data platform, its evolution, as well as designing individual components and services, while collaborating daily with senior technical staff. You will have the opportunity to recommend and drive new initiatives and support our fast-growing organization.

The value add:

  • You will have a key role in the development and evolution of our next-gen multimodal data platform, aided by your manager and your team members. This will include design, implementation, testing, documentation, deployment, maintenance and support of the services and other projects owned by your team.
  • You will be responsible for designing and building individual components and services, as well as contributing to the overall platform architecture. You will collaborate daily with senior technical staff inside and outside of the team.
  • You will be expected to participate in Level 3 Support activities.
  • You will actively participate in code and design reviews with other members of the team.
  • As needed for your projects, you will participate in estimations and risk assessments, and exchange with stakeholders in Product and Project Management and other departments.
  • You will participate in the team’s processes and recurring activities while helping to organize them. You will share your knowledge of best practices in the team and mentor junior team members.

You have demonstrated experience in developing reliable and performant data platforms and services while having a firm grasp on the underlying challenges of releasing a distributed data and software solution to production. You have a basic understanding of the domain of genomics and digital healthcare and care for the impact you can have in this field. You know modern data and software engineering processes, have good knowledge of tools, technologies, and best practices. You seek to exchange regularly and communicate effectively with other members of your team. 

The experience you bring:

  • Master’s degree in Computer Science or Engineering or equivalent professional experience
  • At least 4-6 years of experience working with distributed data, data lakes, microservice-oriented architectures, and APIs, ideally in the healthcare field. 
  • Expertise with Python ETLs in a data processing environment, ideally Databricks
  • Expertise with distributed big data architectures (schemas, transfers, storage, partitioning, performance monitoring and optimization)
  • Solid knowledge of modern scalable database and data lake technologies, especially Spark & SQL
  • Experience with containerization and orchestration technologies, as well as basic DevOps processes and tooling
  • Experience with software engineering best-practices, Agile, CI/CD, Unit & integration testing
  • Good interpersonal and communication skills with a growth mindset
  • Tooling: Azure data services ecosystem, Databricks & Unity Catalog, Terraform, Gitlab
  • Experience with multimodal data spanning digital healthcare, clinical, radiomics and genomics (a plus)
  • Excellent level of English, French is a plus

You will be joining an organization with the patient at the heart of every decision and action, driven by purpose as we drive exponential growth. 

  • Opportunity to work on cutting-edge research projects with an immediate global impact 
  • A flexible, friendly and international working environment with a collaborative atmosphere 
  • An exciting company mission that brings together science and technology to directly impact the lives of patients with life threatening illness
  • A fast-growing company with plenty of opportunity for personal growth and development 
  • A hard technical challenge to solve with exciting modern technology - cloud computing, Big Data, DevOps, machine learning 

If you’re a dynamic, self-motivated professional who believes nothing is impossible and loves to learn and be curious, we’d love to have you as part of our team!

The Process 

Apply now with your CV and any supporting information. All resumes MUST be in English for a successful review. 

Start Date: ASAP 

Location: Rolle, Switzerland (3 days in office)

Contract: Full-Time, Permanent 

See more jobs at SOPHiA GENETICS

Apply for this job

+30d

Senior ETL Data Engineer

AETOS · Remote
agile, Bachelor's degree, sql, salesforce, oracle, linux

AETOS is hiring a Remote Senior ETL Data Engineer

Job Description

Aetos LLC is seeking a Senior ETL Data Engineer to join an existing team providing Extract, Transform and Load (ETL) solutions to a government client. The ideal candidate will have 5+ years of experience with Informatica PowerCenter and will be responsible for successful technical delivery and support of Data Warehousing, Data Migration and Transformation, and Business Intelligence projects using an Agile project management methodology. The duties of this role include all aspects of data processing, storage, and ingestion, as well as data analysis and visualization of relative multi-program data.

Qualifications

Responsibilities:  

ETL/Data Warehouse: 

  • Create, maintain, and reverse engineer the Extract, Transform, and Load (ETL) procedures for the Data Warehouse (DW) environment using the Informatica PowerCenter suite. 
  • Perform analysis of RDBMS tables and PowerCenter objects to answer questions pertaining to the data warehouse and the data transformations. 
  • Create and maintain scripts and files that perform various functions on the Informatica integration servers. Use PuTTY or another Unix text editor to maintain the Linux environment.
  • Maintain data model documentation (ERwin) if changes to the ETL require database changes, and develop, test, and deploy associated DDL.  
  • Manage releases of changes to ETL, scripts, DDL, and scheduling components from Development to Test to Production.   
  • Provide support for the Test, Certification, and Production DW environments.  
  • Maintain Consolidated Data Model (CDM).  
  • Any knowledge of Informatica Cloud Integration Services is a plus
  • Provide ongoing development and maintenance of financial data marts and enterprise data warehouse using BI best practices, relational structures, dimensional data, structured query language skills, data warehouse and reporting techniques.  
  • Collaborate with end users to identify needs and opportunities for improved delivery of data supporting agency financial operations and mission.   
  • Convert business requirements and high-level data collection needs into well-specified ETL, analyses, reporting and visualizations.  
  • Define and log work using JIRA.  
  • Participate in recurring team meetings (Agile). 

Education & Qualifications Required:   

  • Bachelor's degree in Computer Science, Software Engineering, or commensurate experience in a related field.
  • 5+ years of experience using Informatica PowerCenter at development level (creating mappings, workflows, etc.).
  • 7+ years of relevant experience in ETL development support and maintenance.
  • Strong SQL (Oracle) abilities.
  • Proficiency in shell scripting.
  • Experience in an ETL environment where Salesforce is a source is a plus.
  • Experience in an ETL environment where Control-M is used is a plus.
  • 2+ years of Informatica PowerCenter administration; experience in a Linux environment is a plus.
  • Knowledge or usage of Informatica IICS, EDC, and/or AXON is a plus.
  • Excellent analytical, organizational, verbal, and written communication skills.  
  • Experience in gathering requirements and formulating business metrics for reporting.  
  • Familiarity with Erwin data modeling tool.  
  • Experience working in a Microsoft SharePoint environment.  
  • Experience with AGILE and writing User Stories.  
  • Must be able to present diagnostic, troubleshooting steps and conclusions to varied audiences.  
  • Experience monitoring and maintaining enterprise Data Warehouse platforms and BI reporting services.  
  • Banking and lending domain experience a plus.   

See more jobs at AETOS

Apply for this job

+30d

Senior Data Engineer

Gemini · Remote (USA)
remote-first, airflow, sql, Design, css, kubernetes, python, javascript

Gemini is hiring a Remote Senior Data Engineer

About the Company

Gemini is a global crypto and Web3 platform founded by Tyler Winklevoss and Cameron Winklevoss in 2014. Gemini offers a wide range of crypto products and services for individuals and institutions in over 70 countries.

Crypto is about giving you greater choice, independence, and opportunity. We are here to help you on your journey. We build crypto products that are simple, elegant, and secure. Whether you are an individual or an institution, we help you buy, sell, and store your bitcoin and cryptocurrency. 

At Gemini, our mission is to unlock the next era of financial, creative, and personal freedom.

In the United States, we have a flexible hybrid work policy for employees who live within 30 miles of our office headquartered in New York City and our office in Seattle. Employees within the New York and Seattle metropolitan areas are expected to work from the designated office twice a week, unless there is a job-specific requirement to be in the office every workday. Employees outside of these areas are considered part of our remote-first workforce. We believe our hybrid approach for those near our NYC and Seattle offices increases productivity through more in-person collaboration where possible.

The Department: Data

The Role: Senior Data Engineer

As a member of our data engineering team, you'll deliver high-quality work while solving challenges that impact all or part of the team's data architecture. You'll stay current with recent advances in the big data space and provide solutions for large-scale applications that align with the team's long-term goals. Your work will help resolve complex problems by identifying root causes, documenting the solutions, and keeping operational excellence (data auditing, validation, automation, maintainability) in mind. Communicating your insights with leaders across the organization is paramount to success.

Responsibilities:

  • Design, architect and implement best-in-class Data Warehousing and reporting solutions
  • Lead and participate in design discussions and meetings
  • Mentor data engineers and analysts
  • Design, automate, build, and launch scalable, efficient and reliable data pipelines into production using Python (see the sketch after this list)
  • Build real-time data and reporting solutions
  • Design, build and enhance dimensional models for Data Warehouse and BI solutions
  • Research new tools and technologies to improve existing processes
  • Develop new systems and tools to enable the teams to consume and understand data more intuitively
  • Partner with engineers, project managers, and analysts to deliver insights to the business
  • Perform root cause analysis and resolve production and data issues
  • Create test plans, test scripts and perform data validation
  • Tune SQL queries, reports and ETL pipelines
  • Build and maintain data dictionary and process documentation
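
As a toy sketch of the validation flavor of that work, here is a small batch step that loads only rows passing data-quality rules. The paths, columns and rules are hypothetical.

```python
# Minimal sketch of a validated batch load step.
# Paths, columns and quality rules are hypothetical.
import csv
from pathlib import Path

SRC = Path("/data/incoming/trades.csv")         # hypothetical
DST = Path("/data/warehouse/trades_clean.csv")  # hypothetical

def is_valid(row: dict) -> bool:
    # Quality rules: required fields present, quantity is a positive number.
    try:
        return bool(row["trade_id"]) and float(row["quantity"]) > 0
    except (KeyError, TypeError, ValueError):
        return False

with SRC.open() as src, DST.open("w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    loaded = rejected = 0
    for row in reader:
        if is_valid(row):
            writer.writerow(row)
            loaded += 1
        else:
            rejected += 1  # a real pipeline would route these to quarantine

print(f"loaded={loaded} rejected={rejected}")
```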

Minimum Qualifications:

  • 5+ years experience in data engineering with data warehouse technologies
  • 5+ years experience in custom ETL design, implementation and maintenance
  • 5+ years experience with schema design and dimensional data modeling
  • Experience building real-time data solutions and processes
  • Advanced skills with Python and SQL are a must
  • Experience with one or more MPP databases (Redshift, BigQuery, Snowflake, etc.)
  • Experience with one or more ETL tools (Informatica, Pentaho, SSIS, Alooma, etc.)
  • Strong computer science fundamentals including data structures and algorithms
  • Strong software engineering skills in any server-side language, preferably Python
  • Experienced in working collaboratively across different teams and departments
  • Strong technical and business communication

Preferred Qualifications:

  • Kafka, HDFS, Hive, Cloud computing, machine learning, text analysis, NLP & Web development experience is a plus
  • Experience with Continuous integration and deployment
  • Knowledge and experience of financial markets, banking or exchanges
  • Web development skills with HTML, CSS, or JavaScript

It Pays to Work Here
 
The compensation & benefits package for this role includes:
  • Competitive starting salary
  • A discretionary annual bonus
  • Long-term incentive in the form of a new hire equity grant
  • Comprehensive health plans
  • 401K with company matching
  • Paid Parental Leave
  • Flexible time off

Salary Range: The base salary range for this role is between $136,000 and $170,000 in the State of New York, the State of California and the State of Washington. This range is not inclusive of our discretionary bonus or equity package. When determining a candidate’s compensation, we consider a number of factors including skillset, experience, job scope, and current market data.

At Gemini, we strive to build diverse teams that reflect the people we want to empower through our products, and we are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Equal Opportunity is the Law, and Gemini is proud to be an equal opportunity workplace. If you have a specific need that requires accommodation, please let a member of the People Team know.

Apply for this job

Databricks is hiring a Remote Big Data engineer

See more jobs at Databricks

Apply for this job

+30d

Senior Data Engineer

Tiger Analytics · United States, Remote

Tiger Analytics is hiring a Remote Senior Data Engineer

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives.

  • Bachelor’s degree in Computer Science or similar field
  • 8+ years of experience in a Data Engineer role
  • Experience with relational SQL and NoSQL databases like MySQL, Postgres
  • Strong analytical skills and advanced SQL knowledge
  • Development of ETL pipelines using Python & SQL
  • Good experience with Customer Data Platforms (CDP)
  • Experience in SQL optimization and performance tuning
  • Experience with data modeling and building high-volume ETL pipelines.
  • Working experience with any cloud platform
  • Experience with Google Tag Manager and Power BI is a plus
  • Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc.
  • Experience extracting/querying/joining large data sets at scale
  • A desire to work in a collaborative, intellectually curious environment
  • Strong communication and organizational skills

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

+30d

Lead Data Engineer

Extreme Reach · London, England, United Kingdom, Remote Hybrid
DevOps, agile, Design

Extreme Reach is hiring a Remote Lead Data Engineer

XR is a global technology platform powering the creative economy. Its unified platform moves creative and productions forward, simplifying the fragmentation and delivering global insights that drive increased business value. XR operates in 130 countries and 45 languages, serving the top global advertisers and enabling $150 billion in video ad spend around the world. More than half a billion creative brand assets are managed in XR’s enterprise platform. 

Above all, we are a supportive and collaborative culture dedicated to DEI. We are caring, dedicated, positive, genuine, trustworthy, experienced, passionate and fun people with loyalty to our customers and our fellow teammates. It is our belief that the better we work together to help our clients achieve their goals, the more successful XR will be.  

The Opportunity 

We are looking for a motivated and results-driven Lead Data Engineer to join our Development Team, responsible for designing and managing the infrastructure and data systems that power analytics and business intelligence within the organization, including, but not limited to, Lake House architecture and solution development, performance optimization, data feeds development, and opportunities to contribute to Machine Learning & AI initiatives. This role blends advanced technical skills with leadership capabilities to drive the development and integration of solutions at scale. You will contribute to bringing the product up to a modern cloud and tool stack. You will play a crucial role in collaborating and managing cross-functional relationships to ensure seamless integration and alignment of data initiatives, and in translating business requirements into technical solutions. 

Job Responsibilities: 

  • Lead the design and implementation of data lake architecture based on a variety of technologies such as Databricks, Exasol and S3 (see the sketch after this list). 
  • Take accountability and ownership for deploying technical frameworks, processes and best practices which allow engineers of all levels to build extensible, performant and maintainable solutions. 
  • Manage cross-team and stakeholder relationships to drive collaboration and meet shared goals. 
  • Design and implement scalable, reliable, and high-performance data architectures to support large-scale data processing and machine learning workflows. 
  • Architect and develop end-to-end data pipelines, including data extraction, transformation, and loading (ETL) processes. 
  • Optimize data pipelines and storage solutions for performance, scalability, and cost efficiency.  
  • Design the process for monitoring and troubleshooting of data infrastructure issues, identifying performance bottlenecks and ensuring high uptime. 
  • Utilize containerized, serverless architecture patterns in system design; 
  • Promote and drive automated testing, DevOps & CI/CD methodologies to work successfully within an agile environment. 
  • Ensure that data governance, privacy, and security policies are adhered to, in compliance with industry standards and regulations (e.g., GDPR, etc). 
  • Lead, mentor, and support a team of data engineers, providing guidance and support for their technical development. 
  • Collaborate with global cross-functional teams including DevOps, security teams and business stakeholders. 
  • Collaborate with data scientists and machine learning engineers to ensure seamless integration with AI/ML projects. 
  • Stay current with emerging data technologies and trends, evaluating and implementing new tools, frameworks, and platforms to improve the data engineering workflows. 
  • Foster a culture of continuous improvement, encouraging innovation and the adoption of modern tools and best practices in data engineering. 
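
As a minimal sketch of the landing step in such a lake house: a raw extract written to S3 under a partitioned layout that Databricks could mount. The bucket, prefix and source file are hypothetical.

```python
# Minimal sketch: land a raw extract in S3 using a partitioned lake layout.
# The bucket, prefix and source file are hypothetical.
from datetime import date

import boto3

s3 = boto3.client("s3")
today = date.today()
key = (
    "lake/raw/ad_events/"  # hypothetical prefix
    f"year={today.year}/month={today.month:02d}/day={today.day:02d}/events.json"
)
s3.upload_file("/tmp/events.json", "example-xr-lake", key)  # hypothetical bucket
print(f"landed s3://example-xr-lake/{key}")
```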

Qualifications:

  • MS/BS in Computer Science or related background is essential; 
  • Significant hands-on experience (7+ years) in data engineering, with 2+ years in a lead or senior technical role; 
  • Proficiency with Python and SQL is essential; 
  • Proficiency with Spark is essential;  
  • Proven track record of successfully managing large-scale data architectures; 
  • Strong expertise in designing and managing data lakes, data warehouses, data modelling, ETL processes, and database design; 
  • Strong leadership and mentoring skills to guide and develop junior team members; 
  • Experience with shell scripting, system diagnostic and automation tooling; 
  • Experience with various database technologies (MS SQL, Postgres, MySQL) including database performance optimization (e.g., indexing, query optimization); 
  • Experience with NoSQL technologies; 
  • Experience with cloud services (AWS); 
  • Proven experience in implementing DevOps practices; 
  • Experience implementing data quality and code quality practices; 
  • Experience with various programming languages (Java, Scala, JavaScript, etc.) is beneficial; 
  • Proficiency with infrastructure as a code, code automation, CI/CD is beneficial; 
  • Experience in data governance and compliance is beneficial; 
  • Experience with Docker and containers is desirable; 
  • Experience with visualization tools such as Power BI is desirable; 
  • Excellent interpersonal skills with the ability to collaborate and communicate effectively across diverse teams; 
  • Strong problem solving, organization and analytical skills; 
  • Ability to manage competing priorities, handle complexity, and drive projects to completion; 
  • Keen eye for detail. 

See more jobs at Extreme Reach

Apply for this job

+30d

Senior Data Engineer

Nile Bits · Cairo, Egypt, Remote
agile, airflow, sql, Design, docker, linux, python, AWS

Nile Bits is hiring a Remote Senior Data Engineer

Job Description

  • Designing and implementing core functionality within our data pipeline in order to support key business processes
  • Shaping the technical direction of the data engineering team
  • Supporting our Data Warehousing approach and strategy
  • Maintaining our data infrastructure so that our jobs run reliably and at scale
  • Taking responsibility for all parts of the data ecosystem, including data governance, monitoring and alerting, data validation, and documentation
  • Mentoring and upskilling other members of the team

Qualifications

  • Experience building data pipelines and/or ETL processes
  • Experience working in a Data Engineering role
  • Confident writing performant and readable code in Python, building upon the rich Python ecosystem wherever it makes sense to do so.
  • Good software engineering knowledge & skills: OO programming, design patterns, SOLID design principles and clean code
  • Confident writing SQL and good understanding of database design.
  • Experience working with web APIs.
  • Experience leading projects from a technical perspective
  • Knowledge of Docker, shell scripting, working with Linux
  • Experience with a cloud data warehouse
  • Experience in managing deployments and implementing observability and fault tolerance in cloud-based infrastructure (e.g. CI/CD, Infrastructure as Code, container-based infrastructure, auto-scaling, monitoring and alerting)
  • Pro-active with a self-starter mindset; able to identify elegant solutions to difficult problems and able to suggest new and creative approaches.
  • Analytical, problem-solving and an effective communicator; leveraging technology and subject matter expertise in the business to accelerate our roadmap.
  • Able to lead technical discussions, shape the direction of the team, identify opportunities for innovation and improvement
  • Able to lead and deliver projects, ensuring stakeholders are kept up-to-date through regular communication
  • Willing to support the rest of the team when necessary, sharing knowledge and best practices, documenting design decisions, etc.
  • Willing to step outside your comfort zone to broaden your skills and learn new technologies.
  • Experience working with open source orchestration frameworks like Airflow or data analytics tools such as dbt
  • Experience with AWS services or those of another cloud provider
  • Experience with Snowflake
  • Good understanding of Agile

See more jobs at Nile Bits

Apply for this job

+30d

Data Engineer

In All Media Inc · Argentina - Remote
DevOps, mongodb, api, docker, kubernetes, python, AWS, backend

In All Media Inc is hiring a Remote Data Engineer

Data Analyst Engineer

The objective of this project and role is to deliver the solution your business partners need to grow the business, e.g. an application, an API, a rules engine, or a data pipeline. You know what it takes to deliver the best possible solution within the given deadline.

Deliverables:

  • Conversion (an internal tool): deliver recommendations on this tool, resolve technical-debt updates, and maintain and add net-new recommendations. These recommendations can be measured directly by count and impact (for example, how many more features were adopted).
  • CS: simplify data on active points and deliver the best recommendations, with more net-new ones.


    Requirements:

    Backend technologies: Python, Flask, SQLAlchemy, PyMySQL, MongoDB, internal SOA libraries, healthcheck tools, tracing tools
    DevOps: GitLab CI/CD, Docker, Kubernetes, AWS

    We're seeking a talented Software Engineer Level 2 to join our dynamic team responsible for developing a suite of innovative tools. These tools are essential in automating and streamlining communication processes with our clients. If you are passionate about solving complex problems and improving user experiences, we want you on our team.

    See more jobs at In All Media Inc

    Apply for this job

    +30d

    Data Engineer

    Charlotte Tilbury · London, England, United Kingdom, Remote Hybrid
    terraform, airflow, sql, Design, git, python, AWS, javascript

    Charlotte Tilbury is hiring a Remote Data Engineer

    About Charlotte Tilbury Beauty

    Founded by British makeup artist and beauty entrepreneur Charlotte Tilbury MBE in 2013, Charlotte Tilbury Beauty has revolutionised the face of the global beauty industry by de-coding makeup applications for everyone, everywhere, with an easy-to-use, easy-to-choose, easy-to-gift range. Today, Charlotte Tilbury Beauty continues to break records across countries, channels, and categories and to scale at pace.

    Over the last 10 years, Charlotte Tilbury Beauty has experienced exceptional growth and is one of the most talked about brands in the beauty industry and beyond. It has become a global sensation across 50 markets (and growing), with over 2,300 employees globally who are part of the Dream Team making the magic happen.

    Today, Charlotte Tilbury Beauty is a truly global business, delivering market-leading growth, innovative retail and product launches fuelled by industry-leading tech — all with an internal culture of embracing challenges, disruptive thinking, winning together, and sharing the magic. The energy behind the brand is infectious, and as we grow, we are always looking for extraordinary talent who want to be part of our success and help drive our limitless ambitions.

    The Role

     

    Data is at the heart of our strategy to engage and delight our customers, and we are determined to harness its power to go as far as we can to deliver a euphoric, personalised experience that they'll love. 

     

    We're seeking a skilled and experienced Data Engineer to join our Data function and our team of data engineers in the design, build & maintenance of the pipelines that support this ambition. The ideal candidate will not only be able to see many different routes to engineering success, but also work collaboratively with Engineers, Analysts, Scientists & stakeholders to design & build robust data products that meet business requirements.

     

    Our stack is primarily GCP, with Fivetran handling change detection capture, Google Cloud Functions for file ingestion, Dataform & Composer (Airflow) for orchestration, GA & Snowplow for event tracking and Looker as our BI Platform. We use Terraform Cloud to manage our infrastructure programmatically as code.
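
    As a rough sketch of the file-ingestion step in a stack like this, here is a first-generation Cloud Function that loads a newly uploaded Cloud Storage file into BigQuery. The dataset and table names are hypothetical, not from the posting.

    ```python
    # Minimal sketch of a GCS-triggered Cloud Function (1st gen) that loads an
    # uploaded file into BigQuery. The destination table is hypothetical.
    from google.cloud import bigquery

    def ingest_file(event, context):
        """Triggered by a google.storage.object.finalize event."""
        client = bigquery.Client()
        uri = f"gs://{event['bucket']}/{event['name']}"
        job_config = bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,
            skip_leading_rows=1,
            autodetect=True,
        )
        job = client.load_table_from_uri(
            uri, "analytics_raw.uploaded_files", job_config=job_config  # hypothetical
        )
        job.result()  # block until the load job completes
        print(f"loaded {uri}")
    ```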

     

    Reporting Relationships

     

    This role will report into the Lead Data Engineer

     

    About you and attributes we're looking for



    • Extensive experience with cloud data warehouses and analytics query engines such as BigQuery, Redshift or Snowflake, and a good understanding of cloud technologies in general. 
    • Proficient in SQL, Python and Git 
    • Prior experience with HCL (Terraform configuration language), YAML, JavaScript, CLIs and Bash.
    • Prior experience with serverless tooling e.g. Google Cloud Functions, AWS Lambdas, etc.
    • Familiarity with tools such as Fivetran and Dataform/DBT 
    • Bachelor's or Master's degree in Computer Science, Data Science, or related field 
    • Collaborative mindset and a passion for sharing ideas & knowledge
    • Demonstrable experience developing high quality code in the retail sector is a bonus

    At Charlotte Tilbury Beauty, our mission is to empower everybody in the world to be the most beautiful version of themselves. We celebrate and support this by encouraging and hiring people with diverse backgrounds, cultures, voices, beliefs, and perspectives into our growing global workforce. By doing so, we better serve our communities, customers, employees - and the candidates that take part in our recruitment process.

    If you want to learn more about life at Charlotte Tilbury Beauty please follow our LinkedIn page!

    See more jobs at Charlotte Tilbury

    Apply for this job

    +30d

    Data Engineer

    Legalist · Remote
    agile, nosql, sql, Design, c++, docker, kubernetes, AWS

    Legalist is hiring a Remote Data Engineer

    Legalist is an institutional alternative asset management firm. Founded in 2016 and incubated at Y Combinator, the firm uses data-driven technology to invest in credit assets at scale. We are always looking for talented people to join our team.

    As a highly collaborative organization, our data engineers work cross-functionally with software engineering, data science, and product management to optimize growth and strategy of our data pipeline. In this position, you will be joining the data engineering team in an effort to take our data pipeline to the next level.

    Where you come in:

    • Design and develop scalable data pipelines to collect, process, and analyze large volumes of data efficiently.
    • Collaborate with cross-functional teams including data scientists, software engineers, and product managers to understand data requirements and deliver solutions that meet business needs.
    • Develop ELT processes to transform raw data into actionable insights, leveraging tools and frameworks such as Airbyte, BigQuery, Dagster, DBT or similar technologies (see the sketch after this list).
    • Participate in agile development processes, including sprint planning, daily stand-ups, and retrospective meetings, to deliver iterative improvements and drive continuous innovation.
    • Apply best practices in data modeling and schema design to ensure data integrity, consistency, and efficiency.
    • Continuously monitor and optimize data pipelines and systems for performance, availability, scalability, and cost-effectiveness.
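
    As a minimal Dagster sketch of such an ELT flow (asset names and data are hypothetical; in practice the raw asset might come from an Airbyte sync and the transformation from dbt):

    ```python
    # Minimal Dagster sketch: a raw asset and a downstream transformation.
    # Asset names and the inline data are hypothetical.
    from dagster import asset, materialize

    @asset
    def raw_cases():
        # In practice this might be the output of an Airbyte sync or an API pull.
        return [{"case_id": 1, "status": "open"}, {"case_id": 2, "status": "closed"}]

    @asset
    def open_case_count(raw_cases):
        # Downstream transformation, analogous to a dbt model.
        return sum(1 for case in raw_cases if case["status"] == "open")

    if __name__ == "__main__":
        result = materialize([raw_cases, open_case_count])
        print(result.success)
    ```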

    What you’ll be bringing to the team:

    • Bachelor’s degree (BA or BS) or equivalent.
    • A minimum of 2 years of work experience in data engineering or similar role.
    • Advanced SQL knowledge and experience working with a variety of databases (SQL, NoSQL, Graph, Multi-model).
    • A minimum of 2 years of professional experience with ETL/ELT, data modeling and Python.
    • Familiarity with cloud environments like GCP, AWS, as well as cloud solutions like Kubernetes, Docker, BigQuery, etc.
    • You have a pragmatic, data-driven mindset and are not dogmatic or overly idealistic about technology choices and trade-offs.
    • You have an aptitude for learning new things quickly and have the confidence and humility to ask clarifying questions.

    Even better if you have, but not necessary:

    • Experience with one or more of the following: data processing automation, data quality, data warehousing, data governance, business intelligence, data visualization.
    • Experience working with TB scale data.

    See more jobs at Legalist

    Apply for this job

    +30d

    Senior ML & Data Engineer

    Xe · Brazil, Remote
    ML, DevOps, Design, mobile, api, docker, python, AWS, backend

    Xe is hiring a Remote Senior ML & Data Engineer

    At XE, we live currencies. We provide a comprehensive range of currency services and products, including our Currency Converter, Market Analysis, Currency Data API and quick, easy, secure Money Transfers for individuals and businesses. We leverage technology to deliver these services through our website, mobile apps and by phone. Last year, we helped nearly 300 million people access information about the currencies that matter to them, and over 150,000 people used us to send money overseas. Thousands of businesses relied on us for information about the currency markets, advice on managing their foreign exchange risk or trusted us with their business-critical international payments. At XE, we share the belief that behind every currency exchange, query or transaction is a person or business trying to accomplish something important, so we work together to develop new and better currency services that put our customers first. We are proud to be part of Euronet Worldwide (Nasdaq: EEFT), a global leader in processing secure electronic financial transactions. Under Euronet, we have brought together our key brands – XE, HiFX and Currency Online– to become the business that XE is today.

    The Senior ML and Data Engineer will be responsible for designing, building, and maintaining the infrastructure, platform, and processes required to successfully deploy and manage machine learning models in a production environment. This includes tasks such as developing Entity Resolution Solutions, building production features, and integrating ML solutions into production systems. 

    This role will work closely with data scientists and software engineers to ensure that machine learning models can seamlessly integrate into existing systems and processes. The role will also be responsible for identifying and implementing best practices for managing and optimizing machine learning models in production. 

    The ideal candidate for this role will have extensive experience in both software engineering and machine learning, as well as a deep understanding of the challenges and best practices involved in deploying machine learning models in production. Experience working with cloud computing platforms such as AWS or GCP is a plus. 

     

    What You'll Do

    • Build and maintain production-level real-time and batch MLOps pipelines. 
    • Deploy backend and real-time machine learning features and models. 
    • Design and develop multiple ML microservices and APIs (see the sketch after this list). 
    • Monitor and optimize the performance of machine learning systems in production. 
    • Work closely with data scientists and software engineers to ensure the successful integration of machine learning microservices into existing systems and processes. 
    • Mentor junior engineers and provide technical leadership within the team. 
    • Stay updated with the latest advancements in machine learning and data engineering technologies and methodologies. 
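
    For a rough picture, here is a minimal ML microservice sketch. FastAPI is an assumption (the posting only specifies Python APIs and containers), and the feature schema and stub model are hypothetical.

    ```python
    # Minimal ML microservice sketch. FastAPI is an assumption; the feature
    # schema and the stub scoring function are hypothetical.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Features(BaseModel):
        amount: float  # hypothetical transaction features
        corridor: str

    def score(features: Features) -> float:
        # Stand-in for a real model loaded at startup (e.g. from a registry).
        return 0.9 if features.amount > 10_000 else 0.1

    @app.post("/risk-score")
    def risk_score(features: Features):
        return {"risk": score(features)}
    ```

    Run it with, for example, uvicorn main:app (assuming the file is saved as main.py) and POST feature JSON to /risk-score.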

    Who You Are

    • Degree in Computer Science, Software Engineering, or a related discipline. 
    • Extensive experience in developing and maintaining API services in a cloud environment. 
    • Strong object and service-oriented programming skills in Python to write efficient, scalable code. 
    • Knowledge of modern containerization techniques - Docker, Docker Compose. 
    • Experience with relational and unstructured databases and data lakes. 
    • An understanding of business goals and how data policies can affect them. 
    • Effective communication and collaboration skills. 
    • A strong understanding of the concepts associated with privacy and data security. 
    • Proven experience in mentoring and leading engineering teams. 
    • Familiarity with CI/CD pipelines and DevOps practices. 

    Perks & Benefits

    • Annual salary increase review
    • End of the year bonus (Christmas bonus)
    • ESPP (Employee Stock Purchase Plan)
    • 30 days vacation per year
    • Insurance guaranteed for employees (Health, Oncological, Dental, Life Insurance)
    • No fee when using RIA service/wire transfers

    We want Xe to be a great place to work and to ensure that our communities are represented across our workforce.  A vital part of this is ensuring we are a truly inclusive organisation that encourages diversity in all respects. 

    At Xe we are committed to making our recruitment practices barrier-free and as accessible as possible for everyone.  This includes making adjustments or changes for disabled people, neurodiverse people or people with long-term health conditions. If you would like us to do anything differently during the application, interview or assessment process, including providing information in an alternative format, please contact us on recruitment@xe.com 

    See more jobs at Xe

    Apply for this job

    +30d

    Senior ML & Data Engineer

    Xe · Chile, Remote
    ML, DevOps, Design, mobile, api, docker, python, AWS, backend

    Xe is hiring a Remote Senior ML & Data Engineer

    At XE, we live currencies. We provide a comprehensive range of currency services and products, including our Currency Converter, Market Analysis, Currency Data API and quick, easy, secure Money Transfers for individuals and businesses. We leverage technology to deliver these services through our website, mobile apps and by phone. Last year, we helped nearly 300 million people access information about the currencies that matter to them, and over 150,000 people used us to send money overseas. Thousands of businesses relied on us for information about the currency markets, advice on managing their foreign exchange risk or trusted us with their business-critical international payments. At XE, we share the belief that behind every currency exchange, query or transaction is a person or business trying to accomplish something important, so we work together to develop new and better currency services that put our customers first. We are proud to be part of Euronet Worldwide (Nasdaq: EEFT), a global leader in processing secure electronic financial transactions. Under Euronet, we have brought together our key brands – XE, HiFX and Currency Online– to become the business that XE is today.

    The Senior ML and Data Engineer will be responsible for designing, building, and maintaining the infrastructure, platform, and processes required to successfully deploy and manage machine learning models in a production environment. This includes tasks such as developing Entity Resolution Solutions, building production features, and integrating ML solutions into production systems. 

    This role will work closely with data scientists and software engineers to ensure that machine learning models can seamlessly integrate into existing systems and processes. The role will also be responsible for identifying and implementing best practices for managing and optimizing machine learning models in production. 

    The ideal candidate for this role will have extensive experience in both software engineering and machine learning, as well as a deep understanding of the challenges and best practices involved in deploying machine learning models in production. Experience working with cloud computing platforms such as AWS or GCP is a plus. 

     

    What You'll Do

    • Build and maintain production-level real-time and batch MLOps pipelines (a batch-scoring sketch follows this list). 
    • Deploy backend and real-time machine learning features and models. 
    • Design and develop multiple ML microservices and APIs. 
    • Monitor and optimize the performance of machine learning systems in production. 
    • Work closely with data scientists and software engineers to ensure the successful integration of machine learning microservices into existing systems and processes. 
    • Mentor junior engineers and provide technical leadership within the team. 
    • Stay updated with the latest advancements in machine learning and data engineering technologies and methodologies. 
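
    As a purely illustrative sketch of the batch half of such a pipeline: the file paths, feature columns, and model artifact below are invented, not a description of Xe's systems, and a scikit-learn-style classifier is assumed.

        # Hypothetical batch-scoring step in an MLOps pipeline.
        import joblib
        import pandas as pd

        def run_batch_scoring(model_path: str, input_path: str, output_path: str) -> None:
            model = joblib.load(model_path)      # model persisted by a training job
            batch = pd.read_parquet(input_path)  # features from an upstream ETL step
            # Assumes a scikit-learn-style binary classifier.
            batch["score"] = model.predict_proba(batch)[:, 1]
            batch.to_parquet(output_path, index=False)

        if __name__ == "__main__":
            run_batch_scoring("model.joblib", "features.parquet", "scores.parquet")

    In production, a step like this would normally be scheduled and monitored by an orchestrator rather than run by hand.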

    Who You Are

    • Degree in Computer Science, Software Engineering, or a related discipline. 
    • Extensive experience in developing and maintaining API services in a cloud environment. 
    • Strong object and service-oriented programming skills in Python to write efficient, scalable code. 
    • Knowledge of modern containerization techniques - Docker, Docker Compose. 
    • Experience with relational and unstructured databases and data lakes. 
    • An understanding of business goals and how data policies can affect them. 
    • Effective communication and collaboration skills. 
    • A strong understanding of the concepts associated with privacy and data security. 
    • Proven experience in mentoring and leading engineering teams. 
    • Familiarity with CI/CD pipelines and DevOps practices. 

    Perks & Benefits

    • Annual salary increase review
    • End of the year bonus (Christmas bonus)
    • ESPP (Employee Stock Purchase Plan)
    • Paid day off for birthday
    • 15 days vacation per year
    • Insurance guaranteed for employees (Health, Oncological, Dental, Life)
    • No fee when using Ria service/wire transfers

    We want Xe to be a great place to work and to ensure that our communities are represented across our workforce.  A vital part of this is ensuring we are a truly inclusive organisation that encourages diversity in all respects. 

    At Xe we are committed to making our recruitment practices barrier-free and as accessible as possible for everyone.  This includes making adjustments or changes for disabled people, neurodiverse people or people with long-term health conditions. If you would like us to do anything differently during the application, interview or assessment process, including providing information in an alternative format, please contact us on recruitment@xe.com 

    See more jobs at Xe

    Apply for this job

    +30d

    Senior ML & Data Engineer

    XeEl Salvador, Remote
    MLDevOPSDesignmobileapidockerpythonAWSbackend

    Xe is hiring a Remote Senior ML & Data Engineer

    At XE, we live currencies. We provide a comprehensive range of currency services and products, including our Currency Converter, Market Analysis, Currency Data API and quick, easy, secure Money Transfers for individuals and businesses. We leverage technology to deliver these services through our website, mobile apps and by phone. Last year, we helped nearly 300 million people access information about the currencies that matter to them, and over 150,000 people used us to send money overseas. Thousands of businesses relied on us for information about the currency markets, advice on managing their foreign exchange risk or trusted us with their business-critical international payments. At XE, we share the belief that behind every currency exchange, query or transaction is a person or business trying to accomplish something important, so we work together to develop new and better currency services that put our customers first. We are proud to be part of Euronet Worldwide (Nasdaq: EEFT), a global leader in processing secure electronic financial transactions. Under Euronet, we have brought together our key brands – XE, HiFX and Currency Online – to become the business that XE is today.

    The Senior ML and Data Engineer will be responsible for designing, building, and maintaining the infrastructure, platform, and processes required to successfully deploy and manage machine learning models in a production environment. This includes tasks such as developing Entity Resolution Solutions, building production features, and integrating ML solutions into production systems. 

    This role will work closely with data scientists and software engineers to ensure that machine learning models can seamlessly integrate into existing systems and processes. The role will also be responsible for identifying and implementing best practices for managing and optimizing machine learning models in production. 

    The ideal candidate for this role will have extensive experience in both software engineering and machine learning, as well as a deep understanding of the challenges and best practices involved in deploying machine learning models in production. Experience working with cloud computing platforms such as AWS or GCP is a plus. 

     

    What You'll Do

    • Build and maintain production-level real-time and batch MLOps pipelines. 
    • Deploy backend and real-time machine learning features and models. 
    • Design and develop multiple ML microservices and APIs. 
    • Monitor and optimize the performance of machine learning systems in production (a monitoring sketch follows this list). 
    • Work closely with data scientists and software engineers to ensure the successful integration of machine learning microservices into existing systems and processes. 
    • Mentor junior engineers and provide technical leadership within the team. 
    • Stay updated with the latest advancements in machine learning and data engineering technologies and methodologies. 
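
    To make the monitoring bullet concrete, a small hypothetical sketch: wrapping a prediction function so every call logs its latency. The logger name and the 100 ms budget are invented for illustration.

        # Hypothetical latency-monitoring wrapper for a model-serving function.
        import logging
        import time
        from functools import wraps

        log = logging.getLogger("ml_monitoring")

        def monitored(fn):
            """Log how long each call to a prediction function takes."""
            @wraps(fn)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                result = fn(*args, **kwargs)
                elapsed_ms = (time.perf_counter() - start) * 1000
                log.info("%s latency_ms=%.2f", fn.__name__, elapsed_ms)
                if elapsed_ms > 100:  # illustrative latency budget
                    log.warning("%s exceeded latency budget", fn.__name__)
                return result
            return wrapper

        @monitored
        def predict(features: dict) -> float:
            return 0.5  # placeholder for a real model call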

    Who You Are

    • Degree in Computer Science, Software Engineering, or a related discipline. 
    • Extensive experience in developing and maintaining API services in a cloud environment. 
    • Strong object and service-oriented programming skills in Python to write efficient, scalable code. 
    • Knowledge of modern containerization techniques - Docker, Docker Compose. 
    • Experience with relational and unstructured databases and data lakes. 
    • An understanding of business goals and how data policies can affect them. 
    • Effective communication and collaboration skills. 
    • A strong understanding of the concepts associated with privacy and data security. 
    • Proven experience in mentoring and leading engineering teams. 
    • Familiarity with CI/CD pipelines and DevOps practices. 

    We want Xe to be a great place to work and to ensure that our communities are represented across our workforce.  A vital part of this is ensuring we are a truly inclusive organisation that encourages diversity in all respects. 

    At Xe we are committed to making our recruitment practices barrier-free and as accessible as possible for everyone.  This includes making adjustments or changes for disabled people, neurodiverse people or people with long-term health conditions. If you would like us to do anything differently during the application, interview or assessment process, including providing information in an alternative format, please contact us on recruitment@xe.com 

    See more jobs at Xe

    Apply for this job

    Multi Media is hiring a Remote Lead Data Platform Engineer

    About us: Multi Media LLC is a leader in digital innovation, focusing on creating modern products for the content creator community. Our main platform, Chaturbate, is a key player in the adult entertainment industry, bringing billions of people together worldwide. We aim to make Chaturbate the best place for users and creators to interact and connect, offering a safe, creative, and engaging space for everyone.

    We’re looking for a Lead Data Platform Engineer to help us scale a data platform that processes petabytes of data to support our analytics, data science, and machine learning teams. In this role, you’ll lead the team that owns the end-to-end ETL, data warehousing, and data governance processes, and you’ll drive architectural decisions for greenfield projects. You will also coordinate closely with the head of engineering, the head of product, and the machine learning and analytics teams to build roadmaps, prioritize work, and drive team success.

    In particular, you will: 

    • Lead and provide ongoing mentorship and support to the data platform team; build a culture that prepares them for high-impact contributions and encourages their professional growth.
    • Administer and manage Snowflake and other data infrastructure (a minimal sketch follows this list).
    • Serve as a subject-matter expert and decision-maker for data security, governance, and performance. 
    • Collaborate with analytics and machine learning teams to ensure they have the tools and infrastructure to deliver game-changing data products. 
    • Manage the team’s daily operations. 
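
    As a loose illustration of the Snowflake administration work, a sketch using the snowflake-connector-python package; the account, role, and object names are invented, and real credentials would come from a secrets manager rather than code.

        # Hypothetical Snowflake grant management; all names are invented.
        import snowflake.connector

        conn = snowflake.connector.connect(
            account="example_account",
            user="platform_admin",
            password="...",  # prefer key-pair auth or SSO in practice
        )
        cur = conn.cursor()
        try:
            # Give a reporting role read-only access to an analytics schema.
            cur.execute("GRANT USAGE ON WAREHOUSE analytics_wh TO ROLE reporting_role")
            cur.execute("GRANT USAGE ON DATABASE analytics TO ROLE reporting_role")
            cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA analytics.marts TO ROLE reporting_role")
        finally:
            cur.close()
            conn.close()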

    About you:

    • Proven technical leadership or management experience in the areas of data engineering, analytics engineering, or data infrastructure.
    • Deep expertise in ETL architecture and infrastructure tools. 
    • Excellent knowledge of SQL and data modeling tools. 
    • Excellent knowledge of the Snowflake platform. 

    Nice to have: 

    • Experience with DBT (data build tool), GCP, and AWS.
    • Expertise in DataOps and security. 

    What you’ll get:

    • Fair and competitive base salary
    • Fully Remote Optional
    • Health, Vision, Dental, and Life Insurance for you and any dependents, with policy premiums covered by the Company
    • Long & Short term disability insurance
    • Unlimited PTO
    • Annual Year-End Company Closure
    • Optional 401k with 5% matching
    • 12 Paid Holidays
    • Paid Lunches in-office, or if Remote, a $125/week stipend via Sharebite
    • Employee Assistance and Employee Recognition Programs
    • And much more!

    The Base Salary range for this position is $180,000 to $215,000 USD. This range reflects base salary only and does not include additional compensation or benefits. The range displayed reflects the minimum and maximum range for a new hire across the US for the posted position. A candidate’s specific pay will be determined on a case-by-case basis and may vary based on the candidate’s job-related skills, relevant education, training, experience, certifications, and abilities of the candidate, as well as other factors unique to each candidate.

    Multi Media, LLC is an equal opportunity employer and strives for diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage people from underrepresented groups to apply!

    See more jobs at Multi Media

    Apply for this job

    +30d

    Senior Data Engineer

    ecobeeRemote in Canada
    sqlDesignpython

    ecobee is hiring a Remote Senior Data Engineer

    Hi, we are ecobee. 

    ecobee introduced the world’s first smart Wi-Fi thermostat to help millions of consumers save money, conserve energy, and bring home automation into their lives. That was just the beginning. We continue our pursuit to create technology that brings peace of mind into the home and allows people to focus on the moments that matter most. We take pride in making a meaningful difference to the environment, all while being part of the exciting, connected home revolution. 

    In 2021, ecobee became a subsidiary of Generac Power Systems. Generac introduced the first affordable backup generator and later created the automatic home standby generator category. The company is committed to sustainable, cleaner energy products poised to revolutionize the 21st century electrical grid. Together, we take pride in making a meaningful difference to the environment.

    Why we love to do what we do: 

    We’re helping build the world of tomorrow with solutions that improve everyday life while making a positive impact on the planet. Our products and services work in harmony to provide comfort, efficiency, and peace of mind for millions of homes and businesses. While we’re proud of what we’ve done so far, there’s still a lot we can do—and you can be part of it.  

    Join our extraordinary team. 

    We're a rapidly growing global tech company headquartered in Canada, in the heart of downtown Toronto, with a satellite office in Leeds, UK (and remote ecopeeps in the US). We get to work with some of North America's and the UK's leading professionals. Our colleagues are proud to bring their authentic selves to work, confident that what we do is grounded in a greater purpose. We’re always looking for curious, talented, and passionate people to join our team.

    This role is open to being 100% remote within Canada while our home office is located in Toronto, Ontario. You may be required to travel to Toronto once per quarter for team and/or company events.

    Who You’ll Be Joining: 

    You will be part of the dynamic data engineering and machine learning services group at ecobee, focused on leveraging data to enhance the smart home experience for customers. This team is responsible for building and maintaining the data infrastructure and machine learning capabilities that power intelligent features across ecobee’s product ecosystem, such as integrated AI services, energy optimization, home automation, personalized climate control, and predictive maintenance.

    How You’ll Make an Impact:   

    • Design, build, and maintain scalable and efficient ETL/ELT pipelines for both batch and real-time data ingestion and transformation (an orchestration sketch follows this list).
    • Implement data extraction and processing solutions to support analytics, machine learning, and operational use cases.
    • Integrate diverse data sources, including IoT device data, third-party APIs, and internal systems, into centralized data repositories.
    • Develop and maintain data warehousing solutions and ensure data is structured and available for downstream analytics.
    • Monitor and optimize data workflows and infrastructure to ensure high performance and reliability.
    • Implement monitoring, alerting, and logging for data pipelines to proactively identify and resolve issues.
    • Collaborate with data scientists, analysts, product managers, and other engineering teams to understand data requirements and deliver high-quality data solutions.
    • Translate business requirements into technical specifications and provide guidance on data engineering best practices.
    • Implement data quality checks, validation, and cleansing procedures to ensure data integrity and accuracy.
    • Create and maintain comprehensive documentation for data pipelines, architectures, and processes.
    • Share knowledge and best practices with the team, and contribute to the growth and development of the data engineering community within the organization.
    • Architect and implement sophisticated data pipelines that handle massive IoT data streams, ensuring data quality, consistency, and low-latency processing.
    • Introduce frameworks and best practices for feature engineering, data versioning, and experimentation in collaboration with machine learning teams.
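
    To sketch what the batch side of such a pipeline might look like, here is a hypothetical daily ETL job written as an Airflow DAG (assumes a recent Airflow 2.x release); the DAG id, schedule, and stub tasks are invented for illustration.

        # Hypothetical daily ETL pipeline as an Airflow DAG.
        from datetime import datetime

        from airflow import DAG
        from airflow.operators.python import PythonOperator

        def extract(**context):
            print("extract readings for", context["ds"])    # pull from source (stub)

        def transform(**context):
            print("transform readings for", context["ds"])  # clean and enrich (stub)

        def load(**context):
            print("load readings for", context["ds"])       # write to warehouse (stub)

        with DAG(
            dag_id="device_telemetry_daily",
            start_date=datetime(2024, 1, 1),
            schedule="@daily",
            catchup=False,
        ) as dag:
            extract_task = PythonOperator(task_id="extract", python_callable=extract)
            transform_task = PythonOperator(task_id="transform", python_callable=transform)
            load_task = PythonOperator(task_id="load", python_callable=load)
            extract_task >> transform_task >> load_task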

    What You’ll Bring to the Table:    

    • Proficiency in building data pipelines using Python, SQL, and tools like Apache Spark, Apache Kafka, and Apache Airflow (a streaming-ingestion sketch follows this list).
    • Experience with cloud-based data platforms (GCP preferred), including services like BigQuery, Bigtable, and Dataflow.
    • Familiarity with SQL-based operational databases.
    • Familiarity with data processing and storage solutions tailored for machine learning workflows.
    • Good understanding of the machine learning lifecycle and experience in supporting data preparation, feature engineering, and model deployment processes.
    • Experience working with machine learning frameworks and libraries is a plus.
    • Strong experience in data modeling, schema design, and optimization for data warehousing and data lake solutions.
    • Experience with designing data solutions that support both batch and real-time processing requirements.
    • Excellent communication skills, with the ability to work effectively in a collaborative environment and convey technical concepts to non-technical stakeholders.
    • Proven track record of working in cross-functional teams and driving alignment between technical and business goals.
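
    And for the real-time side, a hypothetical ingestion sketch using the kafka-python client; the topic, broker address, and message fields are invented for illustration.

        # Hypothetical real-time IoT ingestion from Kafka.
        import json

        from kafka import KafkaConsumer

        consumer = KafkaConsumer(
            "thermostat-telemetry",
            bootstrap_servers=["localhost:9092"],
            value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
            auto_offset_reset="latest",
        )

        for message in consumer:
            reading = message.value
            # Simple data-quality gate before the record enters the pipeline.
            if reading.get("temperature_c") is None:
                continue  # a real pipeline would dead-letter malformed records
            print(reading["device_id"], reading["temperature_c"])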

    Just so you know: The hired candidate will be required to complete a background check.

    What happens after you apply:   

    Application review. It will happen. By an actual person in Talent Acquisition. We get upwards of 100 applications for some roles, so review can take a few days, but every applicant can expect a note regarding their application status.  

    Interview Process (4 stages):  

    • A 30-minute phone call with a member of Talent Acquisition  
    • A 45-minute call with the Director of Data Engineering and Machine Learning services, focused on behavioural, situational, and culture-fit questions
    • A 90-minute virtual interview with a cross-functional group of engineers - this will be a technical interview where you will be presented with a case study based on a real-life problem. This interview will test the design and coding skills necessary to succeed in this position.
    • The final interview will be a 45-minute conversation with leadership.

    With ecobee, you’ll have the opportunity to: 

    • Be part of something big: Get to work in a fresh, dynamic, and ever-growing industry.  
    • Make a difference for the environment: Make a sustainable impact in your daily job, and beyond it through programs like ecobee acts. 
    • Expand your career: Learn with our in-house learning enablement team, and enjoy our generous professional learning budget. 
    • Put people first: Benefit from competitive salaries, health benefits, and a progressive Parental Top-Up Program (75% top-up or five bonus days off). 
    • Play a part in an exceptional culture: Enjoy a fun and casual workplace with an open-concept office, located at Queens Quay W & York St. ecobee Leeds is based at our riverside office on the Calls. 
    • Celebrate diversity: Be part of a truly welcoming workplace. We offer a mentorship program and bias training.  

    Are you interested? Let's make it work. 

    Our people are empowered to take ownership of their schedules with workflows that allow for flexible hours. Based on your job, you have the option of an office-based, fully remote, or hybrid work environment. New team members working remotely will have all necessary equipment provided and shipped to them, and we conduct our interviews and onboarding sessions primarily through video.

    We’re committed to inclusion and accommodation. 

    ecobee believes that openness and diversity make us better. We welcome applicants from all backgrounds to apply regardless of race, gender, age, religion, identity, or any other aspect which makes them unique. Accommodations can be made upon request for candidates taking part in all aspects of the selection process. Our recruitment team is happy to answer any questions candidates may have about virtual interviewing, onboarding, and future work locations.

    We’re up to incredible things. Come and be part of them. 

    Discover our products and services and learn more about who we are.  

    Ready to join ecobee? View current openings. 

    Please note, ecobee does not accept unsolicited resumes.  

    Apply for this job