Data Engineer Remote Jobs

105 Results

16d

Data Engineer

Methods
United Kingdom, Remote
Design

Methods is hiring a Remote Data Engineer

Methods Analytics exists to improve society by helping people make better decisions with data. We combine passionate people, sector-specific insight and technical excellence to provide our customers an end-to-end data service. We use a collaborative, creative and user-centric approach to data to do good and solve difficult problems. We ensure that our outputs are transparent, robust and transformative.

We value discussion and debate as part of our approach. We will question assumptions, ambition and process – but do so with respect and humility. We relish difficult problems, and overcome them with innovation, creativity and technical freedom to help us design optimum solutions. Ethics, privacy and quality are at the heart of our work and we will not sacrifice these for outcomes. We treat data with respect and use it only for the right purpose. Our people are positive, dedicated and relentless. Data is a vast topic, but we strive for interactions that are engaging, informative and fun in equal measure.

Methods Analytics was acquired by the Alten Group in early 2022.

Purpose of the Role:

Methods Analytics (MA) is recruiting for a Data Engineer to join our team within the Public Sector Business unit on a permanent basis.

This role will be mainly remote but requires flexibility to travel to client sites and to our offices based in London, Sheffield, and Bristol.

  • Work closely with cross-functional teams, translating complex technical concepts into clear, accessible language for non-technical audiences and aligning data solutions with business needs.
  • Collaborate with a dynamic delivery team on innovative projects, transforming raw data into powerful insights that shape strategic decisions and drive business transformation.
  • Utilise platforms and tools such as Microsoft Fabric, Azure Data Factory, Azure Synapse, Databricks, and PowerBI to build robust, scalable, and future-proof end-to-end data solutions.
  • Design and implement efficient ETL and ELT pipelines, ensuring seamless integration and transformation of data from various sources to deliver clean, reliable data.
  • Develop and maintain sophisticated data models, employing dimensional modelling techniques to support comprehensive data analysis and reporting.
  • Implement and uphold best practices in data governance, security, and compliance, using tools like Azure Purview, Unity Catalog, and Apache Atlas to maintain data integrity and trust.
  • Ensure data quality and integrity through meticulous attention to detail and rigorous QA processes, continually refining and optimising data queries for performance and cost-efficiency.
  • Develop intuitive and visually compelling Power BI dashboards that provide actionable insights to stakeholders across the organisation.
  • Monitor and tune solution performance, identifying opportunities for optimisation to enhance the reliability, speed, and functionality of data systems.
  • Stay ahead of industry trends and advancements, continuously enhancing your skills and incorporating the latest Data Engineering tools, languages, and methodologies into your work.

Essential Skills and Experience:

  • Proficiency in SQL and Python: You are highly proficient in SQL and Python, enabling you to handle complex data problems with ease.
  • Understanding of Data Lakehouse Architecture: You have a strong grasp of the principles and implementation of Data Lakehouse architecture.
  • Hands-On Experience with Spark-Based Solutions: You possess experience with Spark-based platforms like Azure Synapse, Databricks, Microsoft Fabric, or even on-premise Spark clusters, using PySpark or Spark SQL to manage and process large datasets.
  • Expertise in Building ETL and ELT Pipelines: You are skilled in building robust ETL and ELT pipelines, mostly in Azure, utilising Azure Data Factory and Spark-based solutions to ensure efficient data flow and transformation.
  • Efficiency in Query Writing: You can craft and optimise queries to be both cost-effective and high-performing, ensuring fast and reliable data retrieval.
  • Experience in Power BI Dashboard Development: You possess experience in creating insightful and interactive Power BI dashboards that drive business decisions.
  • Proficiency in Dimensional Modelling: You are adept at applying dimensional modelling techniques, creating efficient and effective data models tailored to business needs.
  • CI/CD Mindset: You naturally work within Continuous Integration and Continuous Deployment (CI/CD) environments, ensuring automated builds, deployments, and unit testing are integral parts of your development workflow.
  • Business Requirements Translation: You have a knack for understanding business requirements and translating them into precise technical specifications that guide data solutions.
  • Strong Communication Skills: Ability to effectively translate complex technical topics into clear, accessible language for non-technical audiences.
  • Continuous Learning and Development: Commitment to continuous learning and professional development, staying up to date with the latest industry trends, tools, and technologies.

Your Impact:

  • Enable business leaders to make informed decisions with confidence by providing them with timely, accurate, and actionable data insights.
  • Be at the forefront of data innovation, driving the adoption and understanding of modern tooling, architectures, and platforms.
  • Deliver seamless and intuitive data solutions that enhance the user experience, from real-time streaming data services to interactive dashboards.
  • Play a key role in cultivating a data-driven culture within the organisation, mentoring team members, and contributing to the continuous improvement of the Engineering Practice.

Desirable Skills and Experience:

  • Exposure to Microsoft Fabric: Familiarity with Microsoft Fabric and its capabilities would be a significant advantage.
  • Experience with High-Performance Data Systems: Handling large-scale data systems with high performance and low latency, such as managing 1 billion+ records or terabyte-sized databases.
  • Knowledge of Delta Tables or Apache Iceberg: Understanding and experience with Delta Tables or Apache Iceberg for managing large-scale data lakes efficiently.
  • Knowledge of Data Governance Tools: Experience with data governance tools like Azure Purview, Unity Catalog, or Apache Atlas to ensure data integrity and compliance.
  • Exposure to Streaming/Event-Based Technologies: Experience with technologies such as Kafka, Azure Event Hub, and Spark Streaming for real-time data processing and event-driven architectures.
  • Understanding of SOLID Principles: Familiarity with the SOLID principles of object-oriented programming.
  • Understanding of Agile Development Methodologies: Familiarity with iterative and agile development methodologies such as SCRUM, contributing to a flexible and responsive development environment.
  • Familiarity with Recent Innovations: Knowledge of recent innovations such as GenAI, RAG, and Microsoft Copilot, as well as certifications with leading cloud providers and in areas of data science, AI, and ML.
  • Experience with Data for Data Science/AI/ML: Experience working with data tailored for data science, AI, and ML applications.
  • Experience with Public Sector Clients: Experience working with public sector clients and understanding their specific needs and requirements.

This role will require you to have, or be willing to go through, Security Clearance. As part of the onboarding process candidates will be asked to complete a Baseline Personnel Security Standard; details of the evidence required to apply may be found on the government website Gov.UK. If you are unable to meet this and any associated criteria, your employment may be delayed or your application rejected. Details of this will be discussed with you at interview.

Methods Analytics is passionate about its people; we want our colleagues to develop the things they are good at and enjoy.

By joining us you can expect:

  • Autonomy to develop and grow your skills and experience
  • Be part of exciting project work that is making a difference in society
  • Strong, inspiring and thought-provoking leadership
  • A supportive and collaborative environment

As well as this, we offer:

  • Development – access to LinkedIn Learning, a management development programme and training
  • Wellness – 24/7 confidential employee assistance programme
  • Social – Breakfast Tuesdays, Thirsty Thursdays and Pizza on the last Thursday of each month, as well as commitment to charitable causes
  • Time off – 25 days a year
  • Pension – Salary Exchange Scheme with 4% employer contribution and 5% employee contribution
  • Discretionary Company Bonus based on company and individual performance
  • Life Assurance of 4 times base salary
  • Private Medical Insurance which is non-contributory (spouse and dependants included)
  • Worldwide Travel Insurance which is non-contributory (spouse and dependants included)

See more jobs at Methods

Apply for this job

18d

Lead Data Engineer

END
Newcastle Upon Tyne, GB - Remote
agile, sql, Dynamics, Design, azure, AWS

END is hiring a Remote Lead Data Engineer

Recognised as one of the fastest-growing companies in the UK, it’s a really exciting time to be joining END. If you’re positive, passionate and dedicated and want to be part of our future success, this could be the role for you.

LEAD DATA ENGINEER – WASHINGTON

Over the last 19 years, END. has evolved into a technology led retailer that provides luxury and contemporary apparel and exclusive sneaker drops to a global audience. One of the most influential, forward-thinking and inspirational fashion companies in the world, we have fresh products hitting our website daily and our service never stops.

END. prides itself on delivering a first-class customer experience, which has underpinned our success. With over 2 million customers, we deliver to over 80 countries around the world, and our online business is complemented by our industry-leading retail stores in Newcastle, Glasgow, London & Milan.

We currently have an exciting opportunity in our IT Operations Team for a Lead Data Engineer. Working in a busy and forward-thinking team, you will lead the Data Engineering Team to ensure all developments, integrations and environments are functional, available and value-adding, and ultimately contribute to the overall business objectives.

What you’ll be doing:

Key responsibilities

  • Growth of a new in-house data team who are responsible for both project delivery and platform maintenance
  • Investigating, formalising and introducing improvements to the teams’ working processes with minimum disruption to productivity
  • Managing efficient ways of working between Functional and Technical teams
  • Involvement in Technical Solution Design and the Strategic Roadmap for Data
  • Providing insight and guidance on best practices and workflows
  • Prioritising tasks in line with business objectives
  • Working closely with 3rd party vendors to ensure successful integrations
  • Maintaining and improving an Agile development environment and CI/CD process in line with project requirements
  • Reviewing data platform capabilities, presenting ideas for improvements, providing effort estimations, and attending project review meetings

What you’ll be able to demonstrate:

Skills and experience

  • Experience of managing and developing a team of skilled report developers
  • Extensive work experience and demonstrable skills in managing a modern cloud data platform such as AWS, GCP, or Azure
  • Comprehensive understanding of common data architecture concepts, such as Data Lakes, Data Warehouses, and Data Marts
  • Experienced in the design and build of data models that are appropriate to the business context
  • Deep understanding of the most common data integration concepts and patterns, including CDC, WebHooks, SOAP/REST APIs, ETL/ELT
  • Ability to take a balanced view across many tooling options, score them fairly, explain the business benefits, and build a compelling business case for implementation
  • Experience with the full development lifecycle i.e. requirements capture, analysis, design, test, documentation, maintenance, and configuration management
  • Strong architectural understanding of the various frameworks, entities and common data services within Dynamics 365, especially web services
  • Experience of using version and source control to protect, manage and share source code
  • Excellent MS SQL Server skills

What we can offer you

  • Competitive salary
  • 34 days holiday (including bank holidays and your birthday)
  • Company pension scheme
  • Generous staff discount
  • Access to Employee Assistance Programme
  • Registered access to Healthcare Benefits provider
  • Opportunities for professional development and career progression
  • Eye-test vouchers
  • Cycle-to-work scheme

Our core values underpin everything we do as a business. We always put our customers first, are passionate and dedicated and strive for excellence. To achieve this, we are positive and collaborative and keep it simple.

If you have what it takes to be part of our future success, we want to hear from you.

Please note - for the successful candidate, any employment is conditional on you having the right to work in the UK in the role in which you are employed.

Type of employment: Permanent, full-time

See more jobs at END

Apply for this job

18d

Data Engineer

Clover Health
Remote - Canada
ML, remote-first, tableau, airflow, postgres, sql, Design, qa, c++, python, AWS

Clover Health is hiring a Remote Data Engineer

At Clover, the Business Enablement team spearheads our technological advancement while ensuring robust security and compliance. We deliver user-friendly corporate applications, manage complex data ecosystems, and provide efficient tech solutions across the organization. Our goal is simple: we make it easy for the business to do what’s right for Clover.

We are looking for a Data Engineer to join our team. You'll work on the development of data pipelines and tools to support our analytics and machine learning development. Applying insights through data is a core part of our thesis as a company — and you will work on a team that is a central part of helping to deliver that promise through making a wide variety of data easily accessible for internal and external consumers. We work primarily in SQL, Python and our data is stored primarily in Snowflake. You will work with data analysts, other engineers, and healthcare professionals in a unique environment building tools to improve the health of real people. You should have extensive experience leading data warehousing projects with advanced knowledge in data cleansing, ingestion, ETL and data governance.

As a Data Engineer, you will:

  • Collaborate closely with operations, IT and vendor partners to understand the data landscape and contribute to the vision, development and implementation of the Data Warehouse solution.
  • Recommend technologies and tools to support the future state architecture.
  • Develop standards, processes and procedures that align with best practices in data governance and data management.
  • Be responsible for logical and physical data modeling, load and query performance.
  • Develop new secure data feeds with external parties as well as internal applications.
  • Perform regular analysis and QA, diagnose ETL and database related issues, perform root cause analysis, and recommend corrective actions to management.
  • Work with cross-functional teams to support the design, development, implementation, monitoring, and maintenance of new ETL programs.

Success in this role looks like:

  • First 90 days:
    • Develop a strong understanding of our existing data ecosystem and data pipelines.
    • Build relationships with various stakeholder departments to understand their day-to-day operations and their usage of and need for Data Engineering products.
    • Contribute to the design and implementation of new ETL programs to support Clover's growth and operational efficiency.
    • Perform root cause analysis after issues have been identified and propose both short-term and long-term fixes to increase the stability and accuracy of our pipelines.
  • First 6 months:
    • Provide feedback and propose opportunities for improvement on current data engineering processes and procedures.
    • Work with platform engineers on improving data ecosystem stability, data quality monitoring and data governance.
    • Lead discussion with key stakeholders, propose, design and implement new data eng projects that solve critical business problems.
  • How will success be measured in the future?
    • Continue the creation and management of ETL program and data assets.
    • Be the technical Data Eng lead of our data squad’s day to day operation.
    • Guide and mentor other junior members of the team.

You should get in touch if:

  • You have a Bachelor’s degree in Computer Science or related field along with 5+ years of experience in ETL programming.
  • You have professional experience working in a healthcare setting. Health Plan knowledge highly desired, Medicare preferred.  
  • You have expertise in most of these technologies: 
    • Python 
    • Snowflake 
    • DBT
    • Airflow 
    • GCP
    • AWS
    • BigQuery
    • Postgres 
    • Data Governance 
    • Some experience with analytics, data science, ML collaboration tools such as Tableau, Mode, Looker

#LI-Remote

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. We are an E-Verify company.


Benefits Overview:

  • Financial Well-Being: Our commitment to attracting and retaining top talent begins with a competitive base salary and equity opportunities. Additionally, we offer a performance-based bonus program and regular compensation reviews to recognize and reward exceptional contributions.
  • Physical Well-Being: We prioritize the health and well-being of our employees and their families by offering comprehensive group medical coverage that includes coverage for hospitalization, outpatient care, optical services, and dental benefits.
  • Mental Well-Being: We understand the importance of mental health in fostering productivity and maintaining work-life balance. To support this, we offer initiatives such as No-Meeting Fridays, company holidays, access to mental health resources, and a generous annual leave policy. Additionally, we embrace a remote-first culture that supports collaboration and flexibility, allowing our team members to thrive from any location. 
  • Professional Development: We are committed to developing our talent professionally. We offer learning programs, mentorship, professional development funding, and regular performance feedback and reviews.

Additional Perks:

  • Reimbursement for office setup expenses
  • Monthly cell phone & internet stipend
  • Flexibility to work from home, enabling collaboration with global teams
  • Paid parental leave for all new parents
  • And much more!

About Clover: We are reinventing health insurance by combining the power of data with human empathy to keep our members healthier. We believe the healthcare system is broken, so we've created custom software and analytics to empower our clinical staff to intervene and provide personalized care to the people who need it most.

We always put our members first, and our success as a team is measured by the quality of life of the people we serve. Those who work at Clover are passionate and mission-driven individuals with diverse areas of expertise, working together to solve the most complicated problem in the world: healthcare.

From Clover’s inception, Diversity & Inclusion have always been key to our success. We are an Equal Opportunity Employer and our employees are people with different strengths, experiences and backgrounds, who share a passion for improving people's lives. Diversity not only includes race and gender identity, but also age, disability status, veteran status, sexual orientation, religion and many other parts of one’s identity. All of our employee’s points of view are key to our success, and inclusion is everyone's responsibility.


See more jobs at Clover Health

Apply for this job

20d

Lead Data Engineer

Lumos IdentityRemote
OpenAI, sql, Design, mongodb, MySQL

Lumos Identity is hiring a Remote Lead Data Engineer

Imagine having an enterprise-grade AppStore at work — one that ensures you can easily search, request, and gain access to any app you need, precisely when you need it. No more long waiting times with outstanding IT requests. Lumos is solving the app and access management challenges for organizations of all sizes through a unified platform. Our fast-growing startup is pioneering the way to untangle the complex web of app and access management by building the critical infrastructure that defines relationships between apps, identities and data.
 
Why Lumos?
  • Jump on a Rocketship: Since launching out of stealth mode just over 2 years ago, our team has grown from 20 to ~100 people and our customer base has 10x’ed with companies like GitHub, MongoDB and Major League Baseball!
  • Build with Renowned Investor Backing: Andreessen Horowitz (a16z) has backed us since the beginning, and we've raised over $65m from Scale, Neo, Greg Brockman (President at OpenAI), Phil Venables (CISO at Google), and others.
  • Thrive in a Unique Culture: You’ll join an early-stage company where you have actual influence on the trajectory of the company. We deeply care about our people and the philosophy we live by - check out our values here.

Lumos is making it a joy for companies to manage their apps and identities ✨. By integrating usage, spend, compliance, and access data, we provide a level of clarity and insight previously unimaginable. To deliver a best-in-class product, Lumos depends on state-of-the-art data pipelines that power our analytics and AI-driven solutions.

We are seeking a Lead Data Engineer to expand and enhance our existing data infrastructure, built around MySQL, Fivetran, Airbyte, and Snowflake. In this role, you will design and implement production-ready data pipelines with a strong emphasis on reliability, testing, and scalability. Your work will ensure that our AI products and in-product analytics perform flawlessly at scale, driving value for our customers.

✨ Your Responsibilities

  • Your mission is to architect, build, and maintain cutting-edge data pipelines that empower our AI products, in-product analytics, and internal reporting.
  • You will ensure the scalability, reliability, and quality of our analytics data infrastructure, enabling the seamless integration of usage, spend, compliance, and access data to drive business insights and deliver exceptional value to our customers.
  • By focusing on testing, automation, and best-in-class engineering practices, you will play a pivotal role in transforming complex data into actionable intelligence, fueling Lumos' growth and innovation.

What We Value

  • Extensive experience designing and implementing medallion architectures (bronze, silver, gold layers) or similar data warehouse paradigms. Skilled in optimizing data pipelines for both batch and real-time processing.
  • Proficiency in deploying data pipelines using CI/CD tools and integrating automated data quality checks, version control, and deployment automation to ensure reliable and repeatable data processes.
  • Expertise in advanced SQL, ETL processes, and data transformation techniques. Strong programming skills in Python.
  • Demonstrated ability to work closely with AI engineers, data scientists, product engineers, product managers, and other stakeholders to ensure that data pipelines meet the needs of all teams.

Pay Range

  • $190,000 - $245,000. Note that this range is a good-faith estimate of likely pay for this role; upon hire, the pay may differ due to skill and/or level of experience.

 

Benefits and Perks:

  • Remote work culture (±4 hours of Pacific Time)
  • Medical, Vision, & Dental coverage covered by Lumos
  • Company and team bonding trips throughout the year fully covered by Lumos
  • Optimal WFH setup to set you up for success
  • Unlimited PTO, with minimum time off to make sure you are rested and able to be at your best
  • Up to four (4) months off for both the Birthing & Non-birthing parent
  • Wellness stipend to keep you awesome and healthy
  • 401(k) matching plan

Apply for this job

21d

Senior Data Engineer

Live Person
Bulgaria (Remote)
sql, Design

Live Person is hiring a Remote Senior Data Engineer

LivePerson (NASDAQ: LPSN) is the global leader in enterprise conversations. Hundreds of the world’s leading brands — including HSBC, Chipotle, and Virgin Media — use our award-winning Conversational Cloud platform to connect with millions of consumers. We power nearly a billion conversational interactions every month, providing a uniquely rich data set and safety tools to unlock the power of Conversational AI for better customer experiences.  

At LivePerson, we foster an inclusive workplace culture that encourages meaningful connection, collaboration, and innovation. Everyone is invited to ask questions, actively seek new ways to achieve success and reach their full potential. We are continually looking for ways to improve our products and make things better. This means spotting opportunities, solving ambiguities, and seeking effective solutions to the problems our customers care about. 

 Overview:

We are seeking a highly skilled Database Administrator / Data Engineer to join our team. As a Data Engineer, you will play a key role in designing, building, and maintaining the infrastructure and systems necessary for the acquisition, storage, and processing of large volumes of data. Your primary focus will be on developing robust data pipelines, data warehouses, and ETL (Extract, Transform, Load) processes to support the organization's data-driven initiatives. You will collaborate with data scientists, analysts, and other stakeholders to understand data requirements and implement scalable solutions that enable efficient data analysis and reporting.

You will: 

  • Ensure the security and integrity of data stored in databases by implementing access controls, encryption, and other security measures.
  • Monitor database performance and proactively identify and address performance issues.
  • Troubleshoot and resolve database-related issues, such as performance bottlenecks, data corruption, and connectivity problems.
  • Perform database upgrades, patches, and migrations as needed.
  • Data Governance Implementation: Lead the development and implementation of data governance frameworks, policies, and procedures in the organization to ensure data quality, consistency, and integrity across all reporting systems.
  • Dashboard and Reporting Enhancement: Collaborate with business stakeholders to understand reporting requirements, design, develop, and maintain Power BI dashboards and reports that effectively visualize key performance indicators and business metrics.
  • Data Modeling and Architecture: Utilize advanced data modeling techniques (DBT preferred) to integrate complex and disparate data sources into cohesive data models, ensuring accuracy, efficiency, and scalability.
  • Best Practices Advocacy: Champion best practices for data visualization, reporting design, and data analysis methodologies, ensuring adherence to industry standards and organizational guidelines.
  • Technical Expertise: Serve as a subject matter expert in Power BI reporting (preferable).
  • Collaborative Partnerships: Collaborate cross-functionally with data engineers, data scientists and other stakeholders to streamline data integration processes, resolve data quality issues, and drive continuous improvement in reporting capabilities.

You have:

  • Demonstrated ability to independently uncover insights and relationships across numerous datasets
  • 5+ years of experience analyzing data and creating dashboards and reports (Power BI preferred)
  • 5+ years of experience interpreting and writing advanced SQL (Snowflake preferable). 
  • Familiar with Hadoop (we use Vertica and Impala)
  • Experience with database design, normalization, and optimization.
  • Ability to work closely with teammates in a highly collaborative environment and simultaneously be a self-starter with strong individual contributions
  • Excellent communication and presentation skills

Benefits: 

  • Health: medical, dental, and vision
  • Time away: vacation and holidays
  • Development: Generous tuition reimbursement and access to internal professional development resources.
  • Equal opportunity employer

Why you’ll love working here: 

As leaders in enterprise customer conversations, we celebrate diversity, empowering our team to forge impactful conversations globally. LivePerson is a place where uniqueness is embraced, growth is constant, and everyone is empowered to create their own success. And, we're very proud to have earned recognition from Fast Company, Newsweek, and BuiltIn for being a top innovative, beloved, and remote-friendly workplace. 

Belonging at LivePerson:

We are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local law.

We are committed to the accessibility needs of applicants and employees. We provide reasonable accommodations to job applicants with physical or mental disabilities. Applicants with a disability who require reasonable accommodation for any part of the application or hiring process should inform their recruiting contact upon initial connection.

 

Apply for this job

23d

Senior Data Engineer

Oura
Helsinki, Uusimaa, Finland, Remote Hybrid
ML, S3, agile, sql, git, docker, typescript, python, AWS

Oura is hiring a Remote Senior Data Engineer

Our mission at Oura is to empower every person to own their inner potential. Our award-winning products help our global community gain a deeper knowledge of their readiness, activity, and sleep quality by using their Oura Ring and its connected app. We've helped 2.5 million people understand and improve their health by providing daily insights and practical steps to inspire healthy lifestyles.

We are looking for a Senior Software Engineer to join our Data & ML Platform team. 
You’ll join a platform team focused on two major internal systems that support many internal Oura users and several production features:

  • Oura’s Datalake
  • Cloud MLOps systems

Concretely, you will be:

  • Building, operating and improving systems to move, process and store large amounts of data (terabyte-to-petabyte scale), leveraging tools such as AWS Kinesis, S3, Spark/Glue, Athena, dbt, Iceberg, Snowflake, Docker, workflow engines, and more.
  • Building components that support the handling of datasets, training, testing and release of new on-device and cloud-based ML models.
  • Independently collaborating with different stakeholders including Data Scientists, Testing, Firmware, Hardware to define and implement improvements and new functionality.
  • Supporting internal datalake consumers in their day-to-day work.
  • Writing python code (mostly) and some typescript code.
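
As one concrete flavour of the datalake work described above: query engines such as Athena rely on Hive-style partitioned object keys in S3 so that queries can prune by date. A minimal illustrative sketch in plain Python (the table name, event shape, and key layout here are hypothetical, not Oura's actual schema):

```python
from datetime import datetime, timezone

def partition_key(table: str, event: dict) -> str:
    """Build a Hive-style, date-partitioned object key for a datalake,
    the layout Athena/Glue-style engines use for partition pruning."""
    ts = datetime.fromtimestamp(event["ts"], tz=timezone.utc)
    return f"{table}/dt={ts:%Y-%m-%d}/hour={ts:%H}/{event['id']}.json"

key = partition_key("ring_events", {"ts": 1700000000, "id": "abc123"})
# ring_events/dt=2023-11-14/hour=22/abc123.json
```

Writing keys this way means a query filtered on `dt` only has to list and read the matching prefixes, which is what keeps terabyte-scale scans affordable.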

We hope the following can be said about you:

  • 4+ years of experience developing and operating production systems.
  • Experience running, monitoring and debugging production systems at scale on a public cloud. We rely on AWS, but experience with other cloud platforms counts too.
  • Experience with programming languages such as Python and TypeScript. Willingness to code in Python is required, since a large part of our codebase is Python.
  • Good architectural understanding of event-driven architectures, workflow engines, databases and data warehouse systems.
  • Enjoy writing maintainable and well-tested code.
  • Follow common practices: version control (git), issue tracking, unit testing and agile development processes.
  • Generalist and pragmatic approach to development. Knowledge of various programming languages is a plus.
  • Ability to build infrastructure and components following best practices such as CI/CD and infrastructure as code.
  • Broad knowledge of software fundamentals, databases, warehouses and system design.
  • You can write well-structured, testable and high-performance code.
  • You are familiar with some of the following: workflow engines, stream processing, Spark, Athena, SQL, dbt.
  • You are self-motivated, proactive, and bring energy to the teams and projects you work on.
  • You are driven by value creation and overall impact.

Not required but potentially relevant:

  • Knowledge of ML, particularly with PyTorch.

As a company we are focused on improving the way we live our lives. From the people who use our product to the team behind it, we work to empower every person to own their inner potential.

Location

In this role you can work remotely in Finland as well as from our easy-to-reach Helsinki or Oulu offices.

  • If working remotely, availability to occasionally travel to the office is expected (for example for workshops and team gatherings)

Benefits

  • Competitive Salary
  • Lunch benefit
  • Wellness benefit
  • Flexible working hours
  • Collaborative, smart teammates
  • An Oura ring of your own
  • Personal learning & development program
  • Wellness Time Off

If this sounds like the next step for you, please send us your application as soon as possible, but by November 17th at the latest.

Oura is proud to be an equal opportunity workplace. We celebrate diversity and are committed to creating an inclusive environment for all employees. Individuals seeking employment at Oura are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, socioeconomic status, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws. We will not tolerate discrimination or harassment based on any of these characteristics.

See more jobs at Oura

Apply for this job

TRUCKING PEOPLE is hiring a Remote Junior AI Data Engineer (Remote)

Junior AI Data Engineer (Remote) - TRUCKING PEOPLE - Career Page

See more jobs at TRUCKING PEOPLE

Apply for this job

26d

Senior Data Engineer, Streaming

Tubi | San Francisco, CA; Remote
ML, S3, scala, sql, api, c++, AWS

Tubi is hiring a Remote Senior Data Engineer, Streaming

Tubi is a global entertainment company and the most watched free TV and movie streaming service in the U.S. and Canada. Dedicated to providing all people access to all the world’s stories, Tubi offers the largest collection of on-demand content, including over 250,000 premium movies and TV episodes and over 300 exclusive originals. With a passionate fanbase and over 80 million monthly active viewers, the company is committed to putting viewers first with free, accessible entertainment for all.

About the Role:

With such a large catalog of content, data and machine learning are fundamental to Tubi's success, and Tubi's Data Platform is the cornerstone of all data and ML use. At Tubi, you will join a stellar team of engineers with a passion for solving the most challenging problems using cutting-edge technology. In this Lead Data Engineering role, you will be expected to be a hands-on leader, leading by example as you and your team build out real-time systems to handle data at massive scale. You will enable machine learning engineers to iterate and experiment faster than ever before. You will help data scientists take ideas to production in days or weeks, not months or years. And you will build tools to enable data analysis and modeling for even the least tech-savvy colleagues. In short, you will enable Tubi to be truly data-driven.

Responsibilities: 

  • Handle the collection and processing of large-scale raw data
  • Build low-latency, low-maintenance data infrastructure that powers the whole company
  • Develop and enhance our analytics pipeline by creating new tools and solutions to facilitate intuitive data consumption for stakeholders
  • Construct and improve infrastructure for data extraction, transformation, loading and cleaning from diverse sources using APIs, SQL and AWS technologies
  • Improve data quality by building tools, processes and pipelines to enforce, check and manage data quality at a large scale
  • Implement CI/CD pipelines for data operations, ensuring efficient and smooth deployment of data models and applications
  • Address ad hoc data requests and core pipeline tasks
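
Data-quality enforcement of the kind described in the responsibilities above is often expressed as a set of named predicate checks run over every row. A minimal plain-Python sketch (the function, check names, and row shape are hypothetical, not Tubi's actual tooling):

```python
def run_checks(rows, checks):
    """Apply each named predicate check to every row; collect the
    rows that fail each check so they can be reported or quarantined."""
    failures = {name: [] for name in checks}
    for row in rows:
        for name, predicate in checks.items():
            if not predicate(row):
                failures[name].append(row)
    return failures

rows = [{"user_id": 1, "watch_ms": 1200}, {"user_id": None, "watch_ms": -5}]
checks = {
    "user_id_not_null": lambda r: r["user_id"] is not None,
    "watch_ms_non_negative": lambda r: r["watch_ms"] >= 0,
}
bad = run_checks(rows, checks)
```

At a real pipeline's scale the same idea is typically pushed down into Spark or SQL rather than looped in Python, but the declarative shape (named checks, per-check failure sets) stays the same.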

Your Background:

  • 5+ years of experience building scalable batch and streaming data pipelines (Spark or Flink)
  • 3+ years of experience designing and implementing pipelines for ETL and cleaning of data from a wide variety of sources using APIs, SQL, Spark and AWS technologies
  • Greenfield data warehouse modeling experience
  • Strong knowledge of streaming, distributed databases, and cloud storage (e.g., S3)
  • Strong experience in a JVM language (Scala preferred, but not required)
  • Prior experience with Kafka, Kinesis, or equivalent

Pursuant to state and local pay disclosure requirements, the pay range for this role, with final offer amount dependent on education, skills, experience, and location is listed annually below. This role is also eligible for an annual discretionary bonus, long-term incentive plan, and various benefits including medical/dental/vision, insurance, a 401(k) plan, paid time off, and other benefits in accordance with applicable plan documents.

California, New York City, Westchester County, NY, and Seattle, WA Compensation

$164,000 to $234,000 / year + Bonus + Long-Term Incentive Plan + Benefits

Colorado and Washington (excluding Seattle, WA) Compensation

$147,000 to $210,000 / year + Bonus + Long-Term Incentive Plan + Benefits

#LI-MQ1


Tubi is a division of Fox Corporation, and the FOX Employee Benefits summarized here cover the majority of US employee benefits. The following distinctions outline the differences between the Tubi and FOX benefits:

  • For US-based non-exempt Tubi employees, the FOX Employee Benefits summary accurately captures the Vacation and Sick Time.
  • For all salaried/exempt employees, in lieu of the FOX Vacation policy, Tubi offers a Flexible Time off Policy to manage all personal matters.
  • For all full-time, regular employees, in lieu of FOX Paid Parental Leave, Tubi offers a generous Parental Leave Program, which allows parents twelve (12) weeks of paid bonding leave within the first year of the birth, adoption, surrogacy, or foster placement of a child. This time is 100% paid through a combination of any applicable state, city, and federal leaves and wage-replacement programs in addition to contributions made by Tubi.
  • For all full-time, regular employees, Tubi offers a monthly wellness reimbursement.

Tubi is proud to be an equal opportunity employer and considers qualified applicants without regard to race, color, religion, sex, national origin, ancestry, age, genetic information, sexual orientation, gender identity, marital or family status, veteran status, medical condition, or disability. Pursuant to the San Francisco Fair Chance Ordinance, we will consider employment for qualified applicants with arrest and conviction records. We are an E-Verify company.

See more jobs at Tubi

Apply for this job

26d

Data Engineer - AWS

Tiger Analytics | Hartford, Connecticut, United States | Remote
S3, Lambda, airflow, sql, Design, AWS

Tiger Analytics is hiring a Remote Data Engineer - AWS

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Engineering, Data Science, Machine Learning and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership has been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives. The ideal candidate will have strong experience with AWS services, Databricks, and Snowflake.

Key Responsibilities:

  • Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
  • Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.
  • Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
  • Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.

Requirements:

  • 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
  • Hands-on experience in designing and building data pipelines
  • Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
  • Strong experience with Databricks and PySpark for data processing and analytics.
  • Solid understanding of data modeling, database design principles, and SQL and Spark SQL.
  • Experience with version control systems (e.g., Git) and CI/CD pipelines.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
  • Strong problem-solving skills and attention to detail.

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

28d

Data Engineer (F/H)

ASI | Nantes, France | Remote
S3, agile, nosql, airflow, sql, azure, api, java, c++

ASI is hiring a Remote Data Engineer (F/H)

Job Description

For accessibility and clarity, the masculine terms used in the original posting refer to both the feminine and masculine genders.

Simon, head of the Nantes Data team, is looking for a Data Engineer to set up, integrate, develop, and optimise data pipeline solutions in cloud and on-premise environments for our client projects.

Within a dedicated team, mostly in an agile context:

  • You contribute to writing technical and functional specifications
  • You are proficient with structured and unstructured data formats and know how to manipulate them
  • You connect an ETL / ELT solution to a data source
  • You design and build pipelines that transform and enrich data, and schedule their execution
  • You take charge of mediation development
  • You ensure data pipelines are secured
  • You design and build APIs that expose the enriched data
  • You design and implement BI solutions
  • You contribute to writing the functional and technical specifications of data flows
  • You define test and integration plans
  • You handle corrective and evolutionary maintenance
  • You address data quality issues

Depending on your skills and interests, you will work with one or more of the following technologies:

  • The data ecosystem, notably Microsoft Azure
  • Languages: SQL, Java
  • SQL and NoSQL databases
  • Cloud storage: S3, Azure Blob Storage…
  • ETL/ESB and other tools: Talend, Spark, Kafka, NiFi, Matillion, Airflow, Data Factory, Glue...

By joining ASI:

  • You will work in a company whose flexible internal practices are backed by an attentive HR policy (remote-work agreement of 3 days/week, sabbatical-leave ("congé parenthèse") agreement…)
  • You can take part in (or run, if you feel like it) our many rituals, our internal events (geek lunches, tech breakfasts) and external events (DevFest, Camping des Speakers…)
  • You will work in a company soon to be recognised as a "société à mission", Team GreenCaring and not GreenWashing, with a CSR approach it has embodied and driven for more than 10 years (dedicated CSR team, sustainable-mobility allowance agreement…)

Qualifications

You have a higher-education background in computer science or mathematics, or specialised in Big Data, with at least 3 years of experience in data engineering and proven hands-on experience building pipelines for structured and unstructured data.

  • Attached to the quality of what you deliver, you are rigorous and organised in your work.
  • With a solid technology culture, you regularly follow industry developments to keep your knowledge current.
  • A good level of English, both written and spoken, is recommended.

Keen to join a company that reflects who you are, you recognise yourself in our values of trust, listening, enjoyment, and commitment.

The salary offered for this position is between €36,000 and €40,000, depending on experience and skills, while respecting pay equity within the team.

With equal skills, this position is open to people with disabilities.

See more jobs at ASI

Apply for this job

28d

Stage Data Engineer (F/H)

ASI | Nantes, France | Remote
S3, agile, scala, nosql, airflow, mongodb, azure, scrum, java, python

ASI is hiring a Remote Stage Data Engineer (F/H)

Job Description

For accessibility and clarity, the masculine terms used in the original posting refer to both the feminine and masculine genders.

To meet our clients' needs, and to continue developing our Data expertise, we are looking for a Data Engineer intern.

As part of the Nantes Data team, you will join a project under the responsibility of an expert. Day to day:

  • You have a dedicated tutor to follow your progress
  • You take part in developing an end-to-end data processing chain
  • You work on descriptive, inferential or predictive analysis
  • You contribute to technical specifications
  • You learn the Agile Scrum and W-cycle methodologies
  • You build up skills in one or more of the following technology environments:
    • The Data ecosystem: Spark, Hive, Kafka, Hadoop…
    • Languages: Scala, Java, Python…
    • NoSQL databases: MongoDB, Cassandra…
    • Cloud storage: S3, Azure…
    • Market ETL and orchestration tools: Airflow, Data Factory, Talend...

By joining ASI:

  • You will work in a company soon to be recognised as a "société à mission", Team GreenCaring and not GreenWashing, with a CSR approach it has embodied and driven for more than 10 years (dedicated CSR team, sustainable-mobility allowance agreement…)
  • You will join ASI's various expert communities to share best practices and take part in continuous-improvement initiatives.

Qualifications

Currently completing a higher-education degree (engineering school or university, Master's level) in computer science or mathematics, or specialised in Big Data, you are looking for a final-year internship of 4 to 6 months.

  • Respect and commitment are an integral part of your values.
  • Passionate about data, you are rigorous, and your interpersonal skills allow you to integrate easily into the team.

The internship is intended to lead to a concrete permanent (CDI) job offer.

Keen to join a company that reflects who you are, you recognise yourself in our values of trust, listening, enjoyment, and commitment.

With equal skills, this position is open to people with disabilities.

See more jobs at ASI

Apply for this job

28d

Sr. Data Engineer (Databricks)

Fluent | Toronto, Ontario, Canada | Remote
UI

Fluent is hiring a Remote Sr. Data Engineer (Databricks)

Fluent is building the next generation advertising network, Partner Monetize & Advertiser Acquisition.  Our vision is to build an ML/AI first network of advertisers and publishers to achieve a common objective, elevating relevancy in E-commerce for everyday shoppers. 

As a Data Engineer you will bring your Databricks pipeline expertise to execute on building data products to power Fluent’s business lines.  These data products will be the foundation for sophisticated data representation of customer journeys and marketplace activity. 

You are known as a strong and efficient IC Data Engineer, with the ability to assist the Data Architect vetting the translation of an Enterprise Data Model to physical data models and pipelines.  You are familiar with Databricks medallion architecture, and how to work backwards from your enterprise models of Gold Data Products.  You are considered an expert in Spark. 
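
The medallion pattern referenced above (raw bronze, cleaned silver, business-level gold) can be sketched in a few lines of plain Python. This is an illustrative toy with made-up data, not Fluent's actual pipeline; in Databricks each layer would be a Delta table fed by Spark jobs:

```python
# Bronze: raw events exactly as ingested, including bad records.
bronze = [
    {"user": "u1", "amount": "19.99", "valid": "true"},
    {"user": "u2", "amount": "oops", "valid": "true"},
]

def to_silver(raw):
    """Silver layer: clean and type the raw rows, dropping records
    that fail parsing."""
    out = []
    for r in raw:
        try:
            out.append({"user": r["user"], "amount": float(r["amount"])})
        except ValueError:
            continue
    return out

def to_gold(silver):
    """Gold layer: aggregate to a business-level data product,
    here total spend per user."""
    totals = {}
    for r in silver:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals
```

"Working backwards" from a Gold product, as the posting puts it, means fixing the shape of `to_gold`'s output first, then deriving what the silver and bronze layers must supply.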

You will work with your counterparts to build and operate high-impact data solutions.  This role is fully Remote in the United States or Canada, with occasional travel to NYC. 

Fluent is looking for an experienced Data Engineer, who thrives in writing robust code in the Databricks ecosystem.   

What You’ll Do  

  • Majority of the role will be software engineering – Tables, Views, Spark jobs, orchestration within Databricks environment, following an enterprise data model design.  You will help elevate standards across testing, code repository, naming conventions, etc.
  • Develop, deploy, and manage scalable pipelines on Databricks, ensuring robust integration with a Feature Store leveraging online tables for machine learning models. 
  • Investigate and leverage Databricks’ capabilities to implement real-time data processing and streaming, potentially using Spark Streaming, Online Tables, Delta Live Tables. 
  • Contribute and maintain the high quality of the code base with comprehensive data observability, metadata standards, and best practices. 
  • Partner with data science, UI and reporting teams to understand data requirements and translate them into models. 
  • Keep track of emerging tech and trends within the Databricks ecosystem 
  • Share your knowledge by giving brown bags, tech talks, and evangelizing appropriate tech and engineering best practices. 
  • Empower internal teams by providing communication on architecture, target gold tables, execution plans, releases and training. 

Qualifications:

  • Bachelor's or Master's degree in Computer Science
  • 5+ years of industry experience in Data Engineering, including expertise in Spark and SQL
  • 2+ years of experience with the Databricks environment
  • Nice to have: familiarity with real-time ML systems within Databricks

At Fluent, we like what we do and we like who we do it with. Our team is a tight-knit crew of go-getters; we love to celebrate our successes! In addition we offer a fully stocked kitchen, catered breakfast and lunch, and our office manager keeps the calendar stocked with activity-filled events. When we’re not eating, working out, or planning parties, Fluent folks can be found participating in recreational sports leagues, networking with She Runs It, and bonding across teams during quarterly outings to baseball games, fancy dinners, and pizza-making classes. And we have all the practical benefits, too…

  • Competitive compensation
  • Ample career and professional growth opportunities
  • New Headquarters with an open floor plan to drive collaboration
  • Health, dental, and vision insurance
  • Pre-tax savings plans and transit/parking programs
  • 401K with competitive employer match
  • Volunteer and philanthropic activities throughout the year
  • Educational and social events
  • The amazing opportunity to work for a high-flying performance marketing company!

Salary Range: $160,000 to $180,000 - The base salary range represents the low and high end of the Fluent salary range for this position. Actual salaries will vary depending on factors including but not limited to location, experience, and performance.

Fluent participates in the E-Verify Program. As a participating employer, Fluent, LLC will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS), with information from each new employee’s Form I-9 to confirm work authorization. Fluent, LLC follows all federal regulations including those set forth by The Office of Special Counsel for Immigration-Related Unfair Employment Practices (OSC). The OSC enforces the anti-discrimination provision (§ 274B) of the Immigration and Nationality Act (INA), 8 U.S.C. § 1324b.

See more jobs at Fluent

Apply for this job

28d

Data Engineer (Databricks)

Fluent | Toronto, Ontario, Canada | Remote
UI

Fluent is hiring a Remote Data Engineer (Databricks)

Fluent is building the next generation advertising network, Partner Monetize & Advertiser Acquisition.  Our vision is to build an ML/AI first network of advertisers and publishers to achieve a common objective, elevating relevancy in E-commerce for everyday shoppers. 

As a Data Engineer you will bring your Databricks pipeline expertise to execute on building data products to power Fluent’s business lines.  These data products will be the foundation for sophisticated data representation of customer journeys and marketplace activity. 

You are known as a strong and efficient IC Data Engineer, with the ability to assist the Data Architect vetting the translation of an Enterprise Data Model to physical data models and pipelines.  You are familiar with Databricks medallion architecture, and how to work backwards from your enterprise models of Gold Data Products.  You are considered an expert in Spark. 

You will work with your counterparts to build and operate high-impact data solutions.  This role is fully Remote in the United States or Canada, with occasional travel to NYC. 

Fluent is looking for an experienced Data Engineer, who thrives in writing robust code in the Databricks ecosystem.   

What You’ll Do  

  • Majority of the role will be software engineering – Tables, Views, Spark jobs, orchestration within Databricks environment, following an enterprise data model design.  You will help elevate standards across testing, code repository, naming conventions, etc.
  • Develop, deploy, and manage scalable pipelines on Databricks, ensuring robust integration with a Feature Store leveraging online tables for machine learning models. 
  • Investigate and leverage Databricks’ capabilities to implement real-time data processing and streaming, potentially using Spark Streaming, Online Tables, Delta Live Tables. 
  • Contribute and maintain the high quality of the code base with comprehensive data observability, metadata standards, and best practices. 
  • Partner with data science, UI and reporting teams to understand data requirements and translate them into models. 
  • Keep track of emerging tech and trends within the Databricks ecosystem 
  • Share your knowledge by giving brown bags, tech talks, and evangelizing appropriate tech and engineering best practices. 
  • Empower internal teams by providing communication on architecture, target gold tables, execution plans, releases and training. 

Qualifications:

  • Bachelor's or Master's degree in Computer Science
  • 3+ years of industry experience in Data Engineering, including expertise in Spark and SQL
  • 1+ years of experience with the Databricks environment
  • Nice to have: familiarity with real-time ML systems within Databricks

At Fluent, we like what we do and we like who we do it with. Our team is a tight-knit crew of go-getters; we love to celebrate our successes! In addition we offer a fully stocked kitchen, catered breakfast and lunch, and our office manager keeps the calendar stocked with activity-filled events. When we’re not eating, working out, or planning parties, Fluent folks can be found participating in recreational sports leagues, networking with She Runs It, and bonding across teams during quarterly outings to baseball games, fancy dinners, and pizza-making classes. And we have all the practical benefits, too…

  • Competitive compensation
  • Ample career and professional growth opportunities
  • New Headquarters with an open floor plan to drive collaboration
  • Health, dental, and vision insurance
  • Pre-tax savings plans and transit/parking programs
  • 401K with competitive employer match
  • Volunteer and philanthropic activities throughout the year
  • Educational and social events
  • The amazing opportunity to work for a high-flying performance marketing company!

Salary Range: $130,000 to $160,000 - The base salary range represents the low and high end of the Fluent salary range for this position. Actual salaries will vary depending on factors including but not limited to location, experience, and performance.

Fluent participates in the E-Verify Program. As a participating employer, Fluent, LLC will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS), with information from each new employee’s Form I-9 to confirm work authorization. Fluent, LLC follows all federal regulations including those set forth by The Office of Special Counsel for Immigration-Related Unfair Employment Practices (OSC). The OSC enforces the anti-discrimination provision (§ 274B) of the Immigration and Nationality Act (INA), 8 U.S.C. § 1324b.

See more jobs at Fluent

Apply for this job

29d

Senior Data Engineer

M3USA | London, United Kingdom | Remote
agile, sql, oracle, Design, azure, postgresql, python, AWS

M3USA is hiring a Remote Senior Data Engineer

Job Description

Essential Duties and Responsibilities: 

Including, but not limited to:  

  • Design, develop, and maintain high-quality, secure data pipelines and processes to manage and transform data efficiently. 

  • Lead the architecture and implementation of data models, schemas, and integrations that support business intelligence and reporting needs. 

  • Collaborate with cross-functional teams to understand data requirements and deliver optimal data solutions that align with business goals. 

  • Maintain and enhance data infrastructure, including data warehouses, lakes, and integration tools. 

  • Provide guidance on best practices for data management, security, and compliance. 

  • Support Power BI and other visualization tools, ensuring consistent and reliable access to data insights. 

  • Oversee the delivery of data initiatives, ensuring they meet project milestones, KPIs, and deadlines. 

Qualifications

Education and Training Required: 

  • Bachelor’s degree in Computer Science, Data Science, or a related field, or equivalent experience.  

Minimum Experience: 

  • 5+ years of experience in data engineering or related fields.  

  • 2+ years of experience with Power BI or similar data visualization tools.  

Knowledge and Skills: 

  • Proficiency with data engineering tools and technologies (e.g., SQL, Python, ETL tools).  

  • Strong experience with Power BI for data visualization and reporting.  

  • Familiarity with cloud-based data platforms (e.g., AWS, Azure, Google Cloud).  

  • Experience with data modelling, data warehousing, and designing scalable data architectures.  

  • Strong knowledge of database systems (e.g., SQL Server, Oracle, PostgreSQL).  

  • Experience working in an Agile development environment.  

  • Excellent communication skills to work effectively with both technical and non-technical stakeholders.  

  • Ability to multi-task and manage multiple projects simultaneously.  

  • Problem-solving mindset with a desire to continuously improve data processes.  

See more jobs at M3USA

Apply for this job

29d

Senior Data Engineer (Remote)

M3USA | Fort Washington, PA | Remote
agile, sql, oracle, Design, azure, postgresql, python, AWS

M3USA is hiring a Remote Senior Data Engineer (Remote)

Job Description

M3 Global Research, an M3 company, is seeking a Senior Data Engineer to join our data engineering team. This role will focus on building and maintaining robust data pipelines, working closely with stakeholders to ensure data solutions align with business objectives, and utilizing tools like Power BI for data visualization and reporting. The ideal candidate has strong analytical skills, a passion for data-driven decision-making, and excellent communication abilities to work effectively with stakeholders across the organization.

Essential Duties and Responsibilities:

Include, but are not limited to:

  • Design, develop, and maintain high-quality, secure data pipelines and processes to manage and transform data efficiently.
  • Lead the architecture and implementation of data models, schemas, and integrations that support business intelligence and reporting needs.
  • Collaborate with cross-functional teams to understand data requirements and deliver optimal data solutions that align with business goals.
  • Maintain and enhance data infrastructure, including data warehouses, lakes, and integration tools.
  • Provide guidance on best practices for data management, security, and compliance.
  • Support Power BI and other visualization tools, ensuring consistent and reliable access to data insights.
  • Oversee the delivery of data initiatives, ensuring they meet project milestones, KPIs, and deadlines.

Qualifications

  • 5+ years of experience in data engineering or related fields.
  • 2+ years of experience with Power BI or similar data visualization tools.
  • Proficiency with data engineering tools and technologies (e.g., SQL, Python, ETL tools).
  • Strong experience with Power BI for data visualization and reporting.
  • Familiarity with cloud-based data platforms (e.g., AWS, Azure, Google Cloud).
  • Experience with data modeling, data warehousing, and designing scalable data architectures.
  • Strong knowledge of database systems (e.g., SQL Server, Oracle, PostgreSQL).
  • Experience working in an Agile development environment.
  • Excellent communication skills to work effectively with both technical and non-technical stakeholders.
  • Ability to multi-task and manage multiple projects simultaneously.
  • Problem-solving mindset with a desire to continuously improve data processes.  

See more jobs at M3USA

Apply for this job

30d

Sr. Data Engineer

DevOps, terraform, airflow, postgres, sql, Design, api, c++, docker, jenkins, python, AWS, javascript

hims & hers is hiring a Remote Sr. Data Engineer

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

​​About the Role:

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving more than a million Hims & Hers subscribers.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources to Data Lake
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance 
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources.
  • Partner with machine learning engineers to deploy predictive models
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies
  • Partner with DevOps to build IaC and CI/CD pipelines
  • Support code versioning and code deployments for data pipelines
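To make the "testing frameworks and monitoring to improve data quality" bullet concrete: such a harness is often just named predicates run over each batch, with failure counts exported to an alerting system. A minimal sketch in plain Python (the check names, fields, and sample batch are invented for illustration):

```python
# Each data-quality check is a named predicate applied per row; the
# report counts failures so a monitor (e.g., an orchestrator task)
# could alert when a count breaches a threshold.
CHECKS = {
    "id_not_null": lambda row: row.get("user_id") is not None,
    "amount_non_negative": lambda row: row.get("amount", 0) >= 0,
}

def run_checks(rows, checks=CHECKS):
    """Run every check against every row; return failure counts per check."""
    report = {name: 0 for name in checks}
    for row in rows:
        for name, predicate in checks.items():
            if not predicate(row):
                report[name] += 1
    return report

batch = [
    {"user_id": 1, "amount": 25.0},
    {"user_id": None, "amount": 10.0},   # fails id_not_null
    {"user_id": 3, "amount": -5.0},      # fails amount_non_negative
]
report = run_checks(batch)
print(report)
```

In production the same idea is usually expressed through tools named in this posting, such as dbt tests or Airflow sensors, rather than hand-rolled loops.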

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform
  • Experience with Databricks platform
  • Experience with IaC technologies like Terraform
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
  • Experience building event streaming pipelines using Kafka/Confluent Kafka
  • Experience with modern data stack like Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
  • Experience with containers and container orchestration tools such as Docker or Kubernetes.
  • Experience with Machine Learning & MLOps
  • Experience with CI/CD tools (Jenkins, GitHub Actions, CircleCI)

Nice to Have:

  • Experience building data models using dbt
  • Experience with Javascript and event tracking tools like GTM
  • Experience designing and developing systems with desired SLAs and data quality metrics
  • Experience with microservice architecture
  • Experience architecting an enterprise-grade data platform

Our Benefits (there are more but here are some highlights):

  • Competitive salary & equity compensation for full-time roles
  • Unlimited PTO, company holidays, and quarterly mental health days
  • Comprehensive health benefits including medical, dental & vision, and parental leave
  • Employee Stock Purchase Program (ESPP)
  • Employee discounts on hims & hers & Apostrophe online products
  • 401k benefits with employer matching contribution
  • Offsite team retreats

#LI-Remote

 

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions, including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range is
$160,000–$185,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims considers all qualified applicants for employment, including applicants with arrest or conviction records, in accordance with the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance, the California Fair Chance Act, and any similar state or local fair chance laws.

It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.

Hims & Hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, please contact us at accommodations@forhims.com and describe the needed accommodation. Your privacy is important to us, and any information you share will only be used for the legitimate purpose of considering your request for accommodation. Hims & Hers gives consideration to all qualified applicants without regard to any protected status, including disability. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

Libertex Group is hiring a Remote Data Engineer

Established in 1997, the Libertex Group has helped shape the online trading industry by merging innovative technology, market movements and digital trends. 

The multi-awarded online trading platform, Libertex, enables traders to access the market and invest in stocks or trade CFDs with underlying assets being commodities, Forex, ETFs, cryptocurrencies, and others.

Libertex is, also, the Official Online Trading Partner of FC Bayern, bringing the exciting worlds of football and trading together.

We build innovative fintech so people can #TradeForMore with Libertex.

Job Overview

We are responsible for designing and implementing ETL processes using modern dbt technology, managing DWH, data marts, and dashboards, as well as modeling, transforming, testing, and deploying data.

  • Strong SQL Skills (T-SQL preferred) - Expertise in writing complex queries, optimizing database performance, and ensuring data integrity. Ability to design and develop data models, ETL/ELT pipelines, and transformations.
  • Experience with MSSQL Server - Hands-on experience in database design, query optimization, and performance tuning on MSSQL.
  • Familiarity with dbt (Data Build Tool) - Experience in developing, managing, and optimizing data models and transformations in dbt. Ability to design and implement robust data pipelines using dbt, ensuring data accuracy and reliability.
  • Proficiency in Python - Ability to write clean, efficient, and reusable Python scripts for automating data processes. Experience in writing Python code to handle ETL tasks, data manipulation, and API integrations.

Nice to have:

  • Experience with Apache Airflow will be a big plus
  • Experience with Docker will be a plus
  • Experience with GitLab CI/CD will be a plus
  • Strong communication skills to collaborate with data engineers, analysts, and business stakeholders.
  • Proactive problem-solving attitude and a continuous improvement mindset.
  • Excel (MS Office) - advanced level
  • Intermediate (B1) and higher level of English

Responsibilities:

  • Interacting with all team members and participating in development at all stages.
  • Building data integrations; developing and transforming data via dbt models.
  • Creating auto-tests and documentation for models and tests.
  • Creating data pipelines on a regular basis.
  • Optimising data warehouse performance; applying best data engineering practices.
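At the heart of dbt, which this role leans on, is running SQL models in dependency order: each model declares which upstream models it selects from, and dbt derives a valid build order. That scheduling idea can be sketched in a few lines of Python (the model names below are invented examples, not Libertex's actual models):

```python
# Toy dependency resolver in the spirit of dbt: each "model" lists the
# models it selects from, and we compute an order in which every model
# is built only after all of its upstream dependencies.
MODELS = {
    "stg_trades": [],
    "stg_clients": [],
    "fct_trades": ["stg_trades", "stg_clients"],
    "mart_daily_pnl": ["fct_trades"],
}

def build_order(models):
    """Depth-first topological sort (assumes the graph has no cycles)."""
    order, done = [], set()

    def visit(name):
        if name in done:
            return
        for dep in models[name]:
            visit(dep)          # build dependencies first
        done.add(name)
        order.append(name)

    for name in models:
        visit(name)
    return order

print(build_order(MODELS))
```

dbt itself infers this graph from `ref()` calls inside model SQL; the sketch only shows why "building data integrations via dbt models" is largely about declaring dependencies and letting the tool order the work.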

What we offer:

  • Work in a pleasant and enjoyable environment near the Montenegrin sea or mountains
  • Quarterly bonuses based on Company performance
  • Generous relocation package for the employee and their immediate family/partner 
  • Medical Insurance Plan with coverage for the employee and their immediate family from day one
  • 24 working days of annual leave 
  • Yearly reimbursement of travel expenses for the employee and family's flight home
  • Corporate events and team building activities
  • Udemy Business unlimited membership & language training courses 
  • Professional and personal development opportunities in a fast-growing environment 

See more jobs at Libertex Group

Apply for this job

+30d

Senior Data Engineering

NielsenIQIllinois, IL, Remote
DevOps, agile, Design, azure, jenkins, python, AWS

NielsenIQ is hiring a Remote Senior Data Engineering

Job Description

Position Description 

  • Meet with stakeholders to understand the big picture and their asks.
  • Recommend architecture aligned with the goals and objectives of the product/organization. 
  • Recommend standard ETL design patterns and best practices.
  • Drive the detail design and architectural discussions as well as customer requirements sessions to support the implementation of code and procedures for our big data product.
  • Design and develop proof of concept/prototype to demonstrate architecture feasibility. 
  • Collaborate with developers on the team to meet product deliverables. 
  • Must have familiarity with the data science tech stack, including any one of the languages SAS, SPSS, or R.
  • Work independently and collaboratively on a multi-disciplined project team in an Agile development environment. 
  • Ability to identify and solve for code/design optimization. 
  • Learn and integrate with a variety of systems, APIs, and platforms. 
  • Interact with a multi-disciplined team to clarify, analyze, and assess requirements. 
  • Be actively involved in the design, development, and testing activities in big data applications. 

Qualifications

  • Hands-on experience with Python, PySpark, and Jupyter Notebooks.
  • Familiarity with Databricks. Azure Databricks is a plus.
  • Familiarity with data cleansing, transformation, and validation.
  • Proven architecture skills on Big Data projects.
  • Hands-on experience with a code versioning tool such as GitHub, Bitbucket, etc.
  • Hands-on experience building CI/CD pipelines in GitHub Actions (or Azure DevOps, Jenkins, etc.)
  • Hands-on experience with Spark. 
  • Strong written and verbal communication skills. 
  • Self-motivated and ability to work well in a team. 

Any mix of the following skills is also valuable: 

  • Experience with data visualization tools such as Power BI or Tableau. 
  • Experience with DevOps CI/CD tools and automation processes (e.g., Azure DevOps, GitHub, BitBucket).
  • Experience with Azure Cloud Services and Azure Data Factory. 
  • Azure or AWS Cloud certification preferred. 

Education:

  • Bachelor of Science degree from an accredited university 

See more jobs at NielsenIQ

Apply for this job

+30d

Senior Data Engineer

MozillaRemote
sql, Design, c++, python

Mozilla is hiring a Remote Senior Data Engineer

To learn the Hiring Ranges for this position, please select your location from the Apply Now dropdown menu.

To learn more about our Hiring Range System, please click this link.

Why Mozilla?

Mozilla Corporation is the non-profit-backed technology company that has shaped the internet for the better over the last 25 years. We make pioneering brands like Firefox, the privacy-minded web browser, and Pocket, a service for keeping up with the best content online. Now, with more than 225 million people around the world using our products each month, we’re shaping the next 25 years of technology and helping to reclaim an internet built for people, not companies. Our work focuses on diverse areas including AI, social media, security and more. And we’re doing this while never losing our focus on our core mission – to make the internet better for people. 

The Mozilla Corporation is wholly owned by the non-profit 501(c) Mozilla Foundation. This means we aren’t beholden to any shareholders — only to our mission. Along with thousands of volunteer contributors and collaborators all over the world, Mozillians design, build and distribute open-source software that enables people to enjoy the internet on their terms.

About this team and role:

As a Senior Data Engineer at Mozilla, your primary area of focus will be on our Analytics Engineering team. This team focuses on modeling our data so that the rest of Mozilla has access to it, in the appropriate format, when they need it, to help them make data informed decisions. This team is also tasked with helping to maintain and make improvements to our data platform. Some recent improvements include introducing a data catalog and building in data quality checks, among others. Check out the Data@Mozilla blog for more details on some of our work.

What you’ll do: 

  • Work with data scientists to design data models, answer questions and guide product decisions
  • Work with other data engineers to design and maintain scalable data models and ETL pipelines
  • Help improve the infrastructure for ingesting, storing and transforming data at a scale of tens of terabytes per day
  • Help design and build systems to monitor and analyze data from Mozilla’s products
  • Establish best practices for governing data containing sensitive information, ensuring compliance and security

What you’ll bring: 

  • At a minimum 3 years of professional experience in data engineering
  • Proficiency with the programming languages used by our teams (SQL and Python)
  • Demonstrated experience designing data models used to represent specific business activities to power analysis
  • Strong software engineering fundamentals: modularity, abstraction, data structures, and algorithms
  • Ability to work collaboratively with a distributed team, leveraging strong communication skills to ensure alignment and effective teamwork across different time zones
  • Our team requires skills in a variety of domains. You should have proficiency in one or more of the areas listed below, and be interested in learning about the others:
    • You have used data to answer specific questions and guide company decisions.
    • You are opinionated about data models and how they should be implemented; you partner with others to map out a business process, profile available data, design and build flexible data models for analysis.
    • You have experience recommending / implementing new data collection to help improve the quality of data models.
    • You have experience with data infrastructure: databases, message queues, batch and stream processing
    • You have experience building modular and reusable ETL/ELT pipelines in distributed databases
    • You have experience with highly scalable distributed systems hosted on cloud providers (e.g. Google Cloud Platform)
  • Commitment to our values:
    • Welcoming differences
    • Being relationship-minded
    • Practicing responsible participation
    • Having grit

What you’ll get:

  • Generous performance-based bonus plans to all regular employees - we share in our success as one team
  • Rich medical, dental, and vision coverage
  • Generous retirement contributions with 100% immediate vesting (regardless of whether you contribute)
  • Quarterly all-company wellness days where everyone takes a pause together
  • Country specific holidays plus a day off for your birthday
  • One-time home office stipend
  • Annual professional development budget
  • Quarterly well-being stipend
  • Considerable paid parental leave
  • Employee referral bonus program
  • Other benefits (life/AD&D, disability, EAP, etc. - varies by country)

About Mozilla 

Mozilla exists to build the Internet as a public resource accessible to all because we believe that open and free is better than closed and controlled. When you work at Mozilla, you give yourself a chance to make a difference in the lives of Web users everywhere. And you give us a chance to make a difference in your life every single day. Join us to work on the Web as the platform and help create more opportunity and innovation for everyone online.

Commitment to diversity, equity, inclusion, and belonging

Mozilla understands that valuing diverse creative practices and forms of knowledge are crucial to and enrich the company’s core mission. We encourage applications from everyone, including members of all equity-seeking communities, such as (but certainly not limited to) women, racialized and Indigenous persons, persons with disabilities, and persons of all sexual orientations, gender identities, and expressions.

We will ensure that qualified individuals with disabilities are provided reasonable accommodations to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment, as appropriate. Please contact us at hiringaccommodation@mozilla.com to request accommodation.

We are an equal opportunity employer. We do not discriminate on the basis of race (including hairstyle and texture), religion (including religious grooming and dress practices), gender, gender identity, gender expression, color, national origin, pregnancy, ancestry, domestic partner status, disability, sexual orientation, age, genetic predisposition, medical condition, marital status, citizenship status, military or veteran status, or any other basis covered by applicable laws.  Mozilla will not tolerate discrimination or harassment based on any of these characteristics or any other unlawful behavior, conduct, or purpose.

Group: D

#LI-DNI

Req ID: R2679

See more jobs at Mozilla

Apply for this job

+30d

Data Engineer

Blend36Edinburgh, United Kingdom, Remote
terraform, sql, Design, azure, python

Blend36 is hiring a Remote Data Engineer

Job Description

Life as a Data Engineer at Blend

We are looking for someone who is ready for the next step in their career and is excited by the idea of solving problems and designing best-in-class solutions.

However, they also need to be aware of the practicalities of making a difference in the real world – whilst we love innovative advanced solutions, we also believe that sometimes a simple solution can have the most impact.   

Our Data Engineer is someone who feels most comfortable solving problems, answering questions and proposing solutions. We place a high value on the ability to communicate and translate complex analytical thinking into non-technical and commercially oriented concepts, and experience working on difficult projects and/or with demanding stakeholders is always appreciated.

Reporting to a Senior Data Engineer and working closely with the Data Science and Business Development teams, this role will be responsible for driving high delivery standards and innovation in the company. Typically, this involves delivering data solutions to support the provision of actionable insights for stakeholders. 

What can you expect from the role? 

  • Prepare and present data-driven solutions to stakeholders.
  • Analyse and organise raw data.
  • Design, develop, deploy and maintain ingestion, transformation and storage solutions.
  • Use a variety of Data Engineering tools and methods to deliver.
  • Own projects end-to-end.
  • Contribute to solutions design and proposal submissions.
  • Support the development of the data engineering team within Blend.
  • Maintain in-depth knowledge of data ecosystems and trends.
  • Mentor junior colleagues.

Qualifications

What do you need to have?

  • Proven track record of building analytical production pipelines using Python and SQL programming.
  • Working knowledge of large-scale data stores such as data warehouses, and of the best practices and principles for managing them.
  • Experience with development, test and production environments and knowledge and experience of using CI/CD.
  • ETL technical design, development and support.
  • Knowledge of Data Warehousing and database operation, management & design.

Nice to have 

  • Knowledge in container deployment.
  • Experience of ARM template design and production (or other IaC, e.g., CloudFormation, Terraform).
  • Experience in cloud infrastructure management.
  • Experience of Machine Learning deployment.
  • Experience in Azure tools and services such as Azure ADFv2, Azure Databricks, Storage, Azure SQL, Synapse and Azure IoT.
  • Experience of leveraging data out of SAP or S/4HANA.

See more jobs at Blend36

Apply for this job