Data Engineer Remote Jobs

90 Results

5h

Staff Data Engineer

Procore Technologies | Bangalore, India, Remote
scala, airflow, sql, Design, UX, java, kubernetes, python

Procore Technologies is hiring a Remote Staff Data Engineer

Job Description

We’re looking for a Staff Data Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next-generation construction data platform for others to build upon, including Procore developers, analysts, partners, and customers.

As a Staff Data Engineer, you’ll partner with other engineers and product managers across Product & Technology to develop data platform capabilities that enable the movement, transformation, and retrieval of data for use in analytics, machine learning, and service integration. To be successful in this role, you’re passionate about distributed systems including storage, streaming, and batch data processing technologies on the cloud, with a strong bias for action and outcomes. If you’re a seasoned data engineer comfortable and excited about building our next-generation data platform and translating problems into pragmatic solutions that open up the boundaries of technical possibilities—we’d love to hear from you!

This is a full-time position reporting to our Senior Manager of Software Engineering. It will be based in our India office, but employees can choose to work remotely. We are looking for someone to join our team immediately.

What you’ll do: 

  • Participate in the design and implementation of our next-generation data platform for the construction industry
  • Define and implement operational and dimensional data models and transformation pipelines to support reporting and analytics (see the sketch after this list)
  • Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
  • Understand our current data models and infrastructure, proactively identify areas for improvement, and prescribe architectural recommendations with a focus on performance and accessibility. 
  • Work alongside our Product, UX, and IT teams, leveraging your expertise in the data space to influence our product roadmap, developing innovative solutions that add additional value to our platform
  • Help uplevel teammates by conducting code reviews, providing mentorship, pairing, and training opportunities
  • Stay up to date with the latest data technology trends
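
The bullet on dimensional data models above is the kind of work a small sketch can make concrete. The following is illustrative only, not Procore’s actual stack; the table and column names are hypothetical, and it assumes pandas is installed.

```python
# Illustrative sketch: deriving a simple dimensional model from raw
# operational records with pandas. All names are hypothetical.
import pandas as pd

raw = pd.DataFrame({
    "project_id": [101, 101, 102],
    "project_name": ["Bridge Retrofit", "Bridge Retrofit", "Site Prep"],
    "vendor": ["Acme Co", "Acme Co", "BuildRight"],
    "invoice_amount": [12000.0, 8500.0, 4300.0],
    "invoiced_at": pd.to_datetime(["2024-01-05", "2024-02-10", "2024-02-12"]),
})

# Dimension: one row per project, with a surrogate key.
dim_project = (
    raw[["project_id", "project_name"]]
    .drop_duplicates()
    .reset_index(drop=True)
    .assign(project_key=lambda d: d.index + 1)
)

# Fact: invoice amounts keyed to the project dimension.
fact_invoice = raw.merge(dim_project, on=["project_id", "project_name"])[
    ["project_key", "vendor", "invoice_amount", "invoiced_at"]
]

print(dim_project)
print(fact_invoice)
```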

What we’re looking for: 

  • Bachelor’s Degree in Computer Science or a related field is preferred, or comparable work experience 
  • 8+ years of experience building and operating cloud-based, highly available, and scalable data platforms and pipelines supporting vast amounts of data for reporting and analytics
  • 2+ years of experience building data warehouses in Snowflake or Redshift
  • Hands-on experience with MPP query engines like Snowflake, Presto, Dremio, and Spark SQL
  • Expertise in relational and dimensional data modeling
  • Understanding of data access patterns, streaming technology, data validation, performance optimization, and cost optimization
  • Strength in commonly used data technologies and languages such as Python, Java or Scala, Kafka, Spark, Flink, Airflow, Kubernetes, or similar
  • Strong passion for learning, always open to new technologies and ideas

See more jobs at Procore Technologies

Apply for this job

5h

Principal Data Engineer

Procore Technologies | Bangalore, India, Remote
scala, nosql, airflow, Design, azure, UX, java, docker, postgresql, kubernetes, jenkins, python, AWS

Procore Technologies is hiring a Remote Principal Data Engineer

Job Description

We’re looking for a Principal Data Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next-generation construction data platform for others to build upon, including Procore developers, analysts, partners, and customers.

As a Principal Data Engineer, you’ll use your expert-level technical skills to craft innovative solutions while influencing and mentoring other senior technical leaders. To be successful in this role, you’re passionate about distributed systems, including caching, streaming, and indexing technologies on the cloud, with a strong bias for action and outcomes. If you’re an inspirational leader comfortable translating vague problems into pragmatic solutions that open up the boundaries of technical possibilities—we’d love to hear from you!

This position reports to the Senior Manager, Reporting and Analytics. It can be based in our Bangalore or Pune office, or be fully remote from a location in India. We’re looking for someone to join us immediately.

What you’ll do: 

  • Design and build the next-generation data platform for the construction industry
  • Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
  • Contribute to setting standards and development principles across multiple teams and the larger organization
  • Stay connected with other architectural initiatives and craft a data platform architecture that supports and drives our overall platform
  • Provide technical leadership to efforts around building a robust and scalable data pipeline to support billions of events (see the sketch after this list)
  • Help identify and propose solutions for technical and organizational gaps in our data pipeline by running proofs of concept and experiments, working with Data Platform Engineers on implementation
  • Work alongside our Product, UX, and IT teams, leveraging your experience and expertise in the data space to influence our product roadmap, developing innovative solutions that add additional capabilities to our tools
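
As a rough illustration of the event-pipeline work mentioned above (billions of events over Kafka), here is a minimal consumer loop. It is a sketch, not Procore’s implementation; it assumes the confluent-kafka client is installed, and the broker address and topic name are hypothetical.

```python
# Minimal sketch of a high-volume event consumer. Broker and topic
# names ("construction-events") are hypothetical.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "data-platform-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["construction-events"])

try:
    while True:
        msg = consumer.poll(1.0)  # block up to 1s for a message
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # A real pipeline would batch these and write to the lake.
        print(event)
finally:
    consumer.close()
```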

What we’re looking for: 

  • Bachelor’s degree in Computer Science, a similar technical field of study, or equivalent practical experience is required; MS or Ph.D. degree in Computer Science or a related field is preferred
  • 10+ years of experience building and operating cloud-based, highly available, and scalable online serving or streaming systems utilizing large, diverse data sets in production
  • Expertise with diverse data technologies like Databricks, PostgreSQL, GraphDB, NoSQL databases (MongoDB, Cassandra), Elasticsearch, Snowflake, etc.
  • Strength in the majority of commonly used data technologies and languages such as Python, Java or Scala, Kafka, Spark, Airflow, Kubernetes, Docker, Argo, Jenkins, or similar
  • Expertise with all aspects of data systems, including ETL, aggregation strategy, performance optimization, and technology trade-off
  • Understanding of data access patterns, streaming technology, data validation, data modeling, data performance, cost optimization
  • Experience defining data engineering/architecture best practices at a department and organizational level and establishing standards for operational excellence and code and data quality at a multi-project level
  • Strong passion for learning, always open to new technologies and ideas
  • AWS and Azure experience is preferred

See more jobs at Procore Technologies

Apply for this job

1d

Sr Data Engineer

BeyondTrust | Remote Canada | Remote United States
scala, python

BeyondTrust is hiring a Remote Sr Data Engineer

Posted 2024-03-26 | Toronto, Ontario, Canada

BeyondTrust is a place where you can bring your purpose to life through the work that you do, creating a safer world through our cyber security SaaS portfolio.

Our culture of flexibility, trust, and continual learning means you will be recognized for your growth, and for the impact you make on our success. You will be surrounded by people who challenge, support, and inspire you to be the best version of yourself.

The Role

As a Senior Data Engineer at BeyondTrust, you will help build and enhance our data lake…

See more jobs at BeyondTrust

Apply for this job

3d

Data Engineer PySpark AWS

2 years of experience, agile, Bachelor's degree, jira, terraform, scala, airflow, postgres, sql, oracle, Design, mongodb, java, mysql, jenkins, python, AWS

FuseMachines is hiring a Remote Data Engineer PySpark AWS

See more jobs at FuseMachines

Apply for this job

3d

Data Engineer

Resultant | Indianapolis, IN, Remote
nosql, postgres, sql, Design, azure, c++, docker, kubernetes, python, AWS

Resultant is hiring a Remote Data Engineer

Job Description

We are looking for Data Engineers to join our talented data analytics team. As a Data Engineer, you will work closely with many teams across our company on complex, advanced analytical projects to perform data sourcing, data profiling, and other data manipulation functions.  

You will be directly responsible for the solutions we build for our clients, addressing their business needs through requirements gathering and collaborating on solution reviews. We are looking for self-starters with the skills necessary to empathize with the clients’ needs, translate technical complexities, develop appropriate solutions, and contribute to the growth of our technology and data-driven company. 

Here’s what a typical day for you might look like: 

  • Work closely with the solution leads, project managers, data architects, and data scientists on solution design, architecture, and implementation 
  • Perform extraction, transformation, and loading of data from a wide variety of data sources using various data engineering tools and methods. 
  • Query and process large data sets and perform data profiling and data quality assessments (see the sketch after this list).
  • Design and implement data solutions for integration across systems that are both secure and operational. 
  • Assist in creating database models and architecture design and documentation 
  • Conduct research and development and contribute to the long-term positioning of emerging technologies related to data sourcing, cleansing, and integration.
  • Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code. 
  • Improve operations by conducting systems analysis and recommending changes in policies and procedures.
  • Participate in client-facing project activities such as requirements gathering, solution reviews, and explaining technical complexities and business benefits in layperson terms. 
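
For the data-profiling bullet above, a minimal pandas profile might look like the following sketch; the CSV path and quality thresholds are hypothetical stand-ins for a client dataset.

```python
# Minimal data-profiling sketch with pandas; file and thresholds
# are hypothetical.
import pandas as pd

df = pd.read_csv("client_extract.csv")

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "null_count": df.isna().sum(),
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct": df.nunique(),
})
print(profile)

# Simple quality assertions a pipeline might enforce.
assert not df.duplicated().any(), "duplicate rows found"
assert df.isna().mean().max() < 0.10, "a column exceeds 10% nulls"
```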

Qualifications

Some of the qualifications and skills we are expecting include the following:

  • A Bachelor’s degree in Computer Science, Engineering or a similar field is required (Master’s a plus) 
  • 2+ years of data engineering, software engineering, or similar experience 
  • 2+ years of hands-on industry experience working with SQL on various relational databases/platforms (SQL Server, Snowflake, Synapse, Postgres, Databricks, etc.). NoSQL is a plus
  • 2+ years of experience implementing data pipelines/ETL solutions with tools like Data Factory, dbt, Matillion, etc.
  • 2+ years of hands-on experience with object-oriented programming in Python (preferred) or similar such as Go, Rust, C#, etc. 
  • 2+ years of data modeling experience 
  • Strong verbal and communication skills 
  • Collaborative team player who is detail-oriented and focused on solution quality and execution
  • Comfortable working across a wide range of project sizes and industries 
  • Familiarity or experience with cloud platforms such as AWS, Azure, or GCP a plus 
  • Experience with Docker for containerization and Kubernetes for orchestration a plus 

See more jobs at Resultant

Apply for this job

3d

Senior Data Engineer

Devoteam | Tunis, Tunisia, Remote
airflow, sql, scrum

Devoteam is hiring a Remote Senior Data Engineer

Job Description

Within the “Data Platform” department, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on GCP, using tools such as BigQuery, Airflow, Looker, and DBT (see the DAG sketch after this list).
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize the performance of SQL queries and ETL processes to ensure fast response times and scalability.
  • Implement data quality processes to guarantee data integrity and consistency.
  • Work closely with engineering teams to integrate data pipelines into existing applications and services.
  • Stay up to date with new technologies and best practices in data processing and analytics.
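
As a sketch of the kind of GCP pipeline described above (BigQuery + Airflow + dbt), the following hypothetical DAG loads a staging table and then runs dbt models. It assumes a recent Airflow 2.x with the Google provider installed; the project, dataset, and dbt paths are made up.

```python
# Hypothetical daily pipeline: BigQuery staging load, then dbt run.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_staging = BigQueryInsertJobOperator(
        task_id="load_staging",
        configuration={
            "query": {
                "query": "SELECT * FROM `my-project.raw.sales`",
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "staging",
                    "tableId": "sales",
                },
                "useLegacySql": False,
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    )
    run_dbt = BashOperator(
        task_id="run_dbt_models",
        bash_command="cd /opt/dbt/project && dbt run --select sales",
    )
    load_staging >> run_dbt
```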

Qualifications

  • Master’s-level degree (Bac+5) from an engineering school or equivalent university program, with a specialization in computer science.
  • At least 3 years of experience in data engineering, including significant experience in a GCP cloud environment.
  • GCP (Google Cloud Platform) certification is a plus.
  • Excellent written and oral communication (high-quality deliverables and reporting)

See more jobs at Devoteam

Apply for this job

3d

Senior Data Engineer

Braze | Remote - Ontario
Bachelor's degree, sql, Design

Braze is hiring a Remote Senior Data Engineer

At Braze, we have found our people. We’re a genuinely approachable, exceptionally kind, and intensely passionate crew.

We seek to ignite that passion by setting high standards, championing teamwork, and creating work-life harmony as we collectively navigate rapid growth on a global scale while striving for greater equity and opportunity – inside and outside our organization.

To flourish here, you must be prepared to set a high bar for yourself and those around you. There is always a way to contribute: Acting with autonomy, having accountability and being open to new perspectives are essential to our continued success. Our deep curiosity to learn and our eagerness to share diverse passions with others gives us balance and injects a one-of-a-kind vibrancy into our culture.

If you are driven to solve exhilarating challenges and have a bias toward action in the face of change, you will be empowered to make a real impact here, with a sharp and passionate team at your back. If Braze sounds like a place where you can thrive, we can’t wait to meet you.

WHAT YOU’LL DO

Team Overview:

Join our dynamic team dedicated to revolutionizing data analytics for impactful decision-making. The team collaboratively shapes data strategies, optimizing analytics practices to drive business growth.

Responsibilities:

  • Lead the design, implementation, and monitoring of large-scale data warehouses.
  • Excel in SQL with proficiency in window functions, STRUCT/ARRAY manipulation, and query optimization (see the sketch after this list).
  • Dive deep into product understanding, team roadmaps, technical architecture, and data flow.
  • Mentor data-savvy stakeholders on data best practices.
  • Expertly design data models (Snowflake, Star, Data Vault 2.0) for clean data structures.
  • Track downstream usage and feedback for continuous improvement.
  • Adhere to and promote performance best practices based on specific database engine requirements.
  • Embrace a passion for data cataloging, metadata management, and adherence to data governance principles.
  • Design systems with a test-driven approach for trapping bad-quality data and highlighting alerts.
  • Utilize tools like dbt for building efficient data transformation pipelines.
  • Partner effectively with engineering, data analysts, data scientists, and business stakeholders.
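
To make the SQL expectations above concrete, here is a small window-function example. It is runnable with Python’s standard-library sqlite3 module (requires SQLite 3.25+); the events table is hypothetical and unrelated to Braze’s schema.

```python
# Window-function sketch using stdlib sqlite3; table is hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE events (user_id INT, event_ts TEXT, revenue REAL);
    INSERT INTO events VALUES
        (1, '2024-01-01', 10.0),
        (1, '2024-01-03', 25.0),
        (2, '2024-01-02', 7.5);
""")

# Rank each user's events by time and compute a running revenue total.
rows = con.execute("""
    SELECT user_id,
           event_ts,
           ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY event_ts) AS rn,
           SUM(revenue) OVER (PARTITION BY user_id ORDER BY event_ts) AS running_revenue
    FROM events
    ORDER BY user_id, event_ts
""").fetchall()
for r in rows:
    print(r)
```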

WHO YOU ARE

The ideal candidate for this role possesses:

  • 4+ years of hands-on experience in Snowflake and other cloud data warehouses.
  • Proven expertise in SQL, data modeling, and data governance principles using dbt.
  • A track record of leading impactful data projects.
  • Effective collaboration skills with cross-functional teams.
  • In-depth understanding of technical architecture and data flow.
  • Ability to mentor and guide stakeholders on data best practices.
  • Proficiency in schema design (Snowflake, Star, Data Vault 2.0) and large-scale data warehouse implementation.
  • Passion for clean data structures and continuous improvement.
  • Strong analytical and problem-solving skills.
  • Enthusiasm for SQL frameworks like dbt.
  • Expertise in owning and managing dbt projects.
  • Dedication to applying data governance principles at scale.

#LI-REMOTE

WHAT WE OFFER

Details of these benefit plans will be provided if a candidate receives an offer of employment. Benefits may vary by location.

From offering comprehensive benefits to fostering flexible environments, we’ve got you covered so you can prioritize work-life harmony.

  • Competitive compensation that may include equity
  • Retirement and Employee Stock Purchase Plans
  • Flexible paid time off
  • Comprehensive benefit plans covering medical, dental, vision, life, and disability
  • Family services that include fertility benefits and equal paid parental leave
  • Professional development supported by formal career pathing, learning platforms, and tuition reimbursement
  • Community engagement opportunities throughout the year, including an annual company wide Volunteer Week
  • Employee Resource Groups that provide supportive communities within Braze
  • Collaborative, transparent, and fun culture recognized as a Great Place to Work®

ABOUT BRAZE

Braze is a leading customer engagement platform that powers lasting connections between consumers and brands they love. Braze allows any marketer to collect and take action on any amount of data from any source, so they can creatively engage with customers in real time, across channels from one platform. From cross-channel messaging and journey orchestration to AI-powered experimentation and optimization, Braze enables companies to build and maintain absolutely engaging relationships with their customers that foster growth and loyalty.

Braze is proudly certified as a Great Place to Work® in the U.S., the UK and Singapore. We ranked #3 on Great Place to Work UK’s 2024 Best Workplaces (Large), #3 on Great Place to Work UK’s 2023 Best Workplaces for Wellbeing (Medium), #4 on Great Place to Work’s 2023 Best Workplaces in Europe (Medium), #10 on Great Place to Work UK’s 2023 Best Workplaces for Women (Large), #19 on Fortune’s 2023 Best Workplaces in New York (Large). We were also featured in Built In's 2024 Best Places to Work, U.S. News Best Technology Companies to Work For, and Great Place to Work UK’s 2023 Best Workplaces in Tech.

You’ll find many of us at headquarters in New York City or around the world in Austin, Berlin, Chicago, Jakarta, London, Paris, San Francisco, Singapore, Sydney and Tokyo – not to mention our employees in nearly 50 remote locations.

BRAZE IS AN EQUAL OPPORTUNITY EMPLOYER

At Braze, we strive to create equitable growth and opportunities inside and outside the organization.

Building meaningful connections is at the heart of everything we do, and that includes our recruiting practices. We're committed to offering all candidates a fair, accessible, and inclusive experience – regardless of age, color, disability, gender identity, marital status, national origin, race, religion, sex, sexual orientation, or status as a protected veteran. When applying and interviewing with Braze, we want you to feel comfortable showcasing what makes you you.

We know that sometimes different circumstances can lead talented people to hesitate to apply for a role unless they meet 100% of the criteria. If this sounds familiar, we encourage you to apply, as we’d love to meet you.

Please see our Candidate Privacy Policy for more information on how Braze processes your personal information during the recruitment process and, if applicable based on your location, how you can exercise any privacy rights.

See more jobs at Braze

Apply for this job

5d

Senior Data Engineer (UK REMOTE)

Turnitin LLC | London, United Kingdom, Remote
4 years of experience, Design, azure, java, elasticsearch, python, AWS

Turnitin LLC is hiring a Remote Senior Data Engineer (UK REMOTE)

Job Description

Your role as a Senior Data Engineer entails a range of responsibilities, necessitating a balanced skillset:

  • AI Data Engineering: Design, build, operate and deploy real-time data pipelines at scale using AI techniques and best practices. Support Turnitin's AI R&D efforts by applying advanced data warehousing, data science, and data engineering technologies. Aim for automation to enable a faster time-to-market and better reusability of new AI initiatives.
  • Collaboration: Work in tandem with the AI R&D teams and the Data Platform Team to collect, create, curate and maintain high-quality AI datasets. Ensure alignment of data architecture and data models across different products and platforms.
  • Innovation: Unearth insights from Turnitin's rich data resources through innovative research and development.
  • Hands-on Involvement: Engage in data engineering and data science tasks as required to support the team and the projects. Conduct and own external data collection efforts - including state-of-the-art prompt engineering techniques - to support the construction of cutting-edge AI models (see the sketch after this list).
  • Communication: Foster clear communication within the team and the organization, and ensure understanding of the company's vision and mission.
  • Continuous Learning: Keep abreast of new tools and development strategies, bringing innovative recommendations to leadership.
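
As a hedged illustration of the dataset-curation work described above, the sketch below filters and normalizes a public corpus with the Hugging Face datasets library (assumed installed); the filter criteria are arbitrary examples, not Turnitin’s process.

```python
# Small dataset-curation step with Hugging Face `datasets`;
# the length filter and cleaning rule are hypothetical.
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# Normalize whitespace before the text feeds an AI training set.
def clean(example):
    example["text"] = " ".join(example["text"].split())
    return example

curated = ds.filter(lambda ex: len(ex["text"]) < 2000).map(clean)
print(curated)
curated.to_parquet("curated_train.parquet")
```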

Qualifications

  • At least 4 years of experience in data engineering, ideally focused on enabling and accelerating AI R&D.
  • Strong proficiency in Python, Java, and SQL.
  • Proficiency with Redshift, Hadoop, Elasticsearch, and cloud platforms (AWS, Azure, GCP).
  • Familiarity interacting with AI frameworks including PyTorch and TensorFlow and AI libraries such as Huggingface and Scikit-Learn.
  • Experience with Large Language Models (LLMs) and LLM APIs.
  • Strong problem-solving, analytical, and communication skills, along with the ability to thrive in a fast-paced, collaborative environment.

Desired Qualifications

  • 6+ years of experience in data engineering with a focus on AI and machine learning projects.
  • Experience in a technical leadership role.
  • Familiarity with natural language processing (NLP) techniques and tools.
  • Experience in the education or education technology sectors.
  • Experience with data visualization and data communications.

Characteristics for Success

As a Senior Data Engineer, you should possess:
  • A passion for creatively solving complex data problems.
  • The ability to work collaboratively and cross-functionally.
  • A continuous learning mindset, always striving to improve your skills and knowledge.
  • A proven track record of delivering results and ensuring a high level of quality.
  • Strong written and verbal communication skills.
  • Curiosity about the problems at hand, the field at large, and the best solutions.
  • Strong system-level problem-solving skills.

Apply for this job

7d

Data Engineer

phData | LATAM - Remote
scala, sql, azure, java, python, AWS

phData is hiring a Remote Data Engineer

See more jobs at phData

Apply for this job

7d

Senior Data Engineer

Samsara | Remote - US
agile, sql, oracle, Design, azure, api, docker, postgresql, mysql, kubernetes, python, AWS, backend

Samsara is hiring a Remote Senior Data Engineer

Who we are

Samsara (NYSE: IOT) is the pioneer of the Connected Operations™ Cloud, which is a platform that enables organizations that depend on physical operations to harness Internet of Things (IoT) data to develop actionable insights and improve their operations. At Samsara, we are helping improve the safety, efficiency and sustainability of the physical operations that power our global economy. Representing more than 40% of global GDP, these industries are the infrastructure of our planet, including agriculture, construction, field services, transportation, and manufacturing — and we are excited to help digitally transform their operations at scale.

Working at Samsara means you’ll help define the future of physical operations and be on a team that’s shaping an exciting array of product solutions, including Video-Based Safety, Vehicle Telematics, Apps and Driver Workflows, Equipment Monitoring, and Site Visibility. As part of a recently public company, you’ll have the autonomy and support to make an impact as we build for the long term. 

Recent awards we’ve won include:

Glassdoor's Best Places to Work 2024

Best Places to Work by Built In 2024

Great Place To Work Certified™ 2023

Fast Company's Best Workplaces for Innovators 2023

Financial Times The Americas’ Fastest Growing Companies 2023

We see a profound opportunity for data to improve the safety, efficiency, and sustainability of operations, and hope you consider joining us on this exciting journey. 

Click here to learn more about Samsara's cultural philosophy.

About the role:

Data and Analytics is a critical team within Business Technology. Our mission is to enable integrated data layers for all of Samsara and Samsara customers with the insights, tools, infrastructure and consultation to make data driven decisions. We are a growing team that loves all things data! The team will be composed of data engineers, architects, analysts and data scientists. We are passionate about leveraging world class data and analytics to deliver a great customer experience.  

Our team promotes an agile, collaborative, supportive environment where diverse thinking, innovative design, and experimentation is welcomed and encouraged.

You should apply if:

  • You want to impact the industries that run our world: Your efforts will result in real-world impact—helping to keep the lights on, get food into grocery stores, reduce emissions, and most importantly, ensure workers return home safely.
  • You are the architect of your own career: If you put in the work, this role won’t be your last at Samsara. We set up our employees for success and have built a culture that encourages rapid career development, countless opportunities to experiment and master your craft in a hyper growth environment.
  • You’re energized by our opportunity: The vision we have to digitize large sectors of the global economy requires your full focus and best efforts to bring forth creative, ambitious ideas for our customers.
  • You want to be with the best: At Samsara, we win together, celebrate together and support each other. You will be surrounded by a high-caliber team that will encourage you to do your best. 

Click here to learn about what we value at Samsara.

In this role, you will:

  • Develop and maintain E2E data pipelines and backend ingestion, and participate in the build of Samsara’s Data Platform to enable advanced automation and analytics (see the sketch after this list).
  • Work with data from a variety of sources including but not limited to: CRM data, Product data, Marketing data, Order flow data, Support ticket volume data.
  • Manage critical data pipelines to enable our growth initiatives and advanced analytics.
  • Facilitate data integration and transformation requirements for moving data between applications; ensuring interoperability of applications with data layers and data lake.
  • Develop and improve the current data architecture, data quality, monitoring, observability and data availability.
  • Write data transformations in SQL/Python to generate data products consumed by customer systems and Analytics, Marketing Operations, Sales Operations teams.
  • Champion, role model, and embed Samsara’s cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices. 
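
As an illustration of the ingestion work in the first bullet above, here is a minimal sketch that pulls records from a REST source and lands raw JSON in S3. It is not Samsara’s internal pipeline; the endpoint, token, and bucket are hypothetical, and it assumes requests and boto3 are installed.

```python
# Hypothetical ingestion: REST source -> raw JSON in S3.
import json
from datetime import datetime, timezone

import boto3
import requests

resp = requests.get(
    "https://api.example-crm.com/v1/accounts",   # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=30,
)
resp.raise_for_status()
records = resp.json()

key = f"raw/crm/accounts/{datetime.now(timezone.utc):%Y/%m/%d}/accounts.json"
boto3.client("s3").put_object(
    Bucket="analytics-data-lake",  # hypothetical bucket
    Key=key,
    Body=json.dumps(records).encode("utf-8"),
)
print(f"landed {len(records)} records at s3://analytics-data-lake/{key}")
```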

Minimum requirements for the role:

  • A Bachelor’s degree in computer science, data engineering, data science, information technology, or equivalent engineering program.
  • 5+ years of work experience as a data engineer, including 3+ years of experience in designing, developing, testing, and maintaining E2E data pipelines.
  • Experience with modern cloud-based data-lake and data-warehousing technology stacks, and familiarity with typical data-engineering tools, ETL/ELT, and data-warehousing processes and best practices.
  • Experience with the following:
    • Languages: Python, SQL.
    • Exposure to ETL tools such as Fivetran, DBT or equivalent.
    • API: Exposure to python based API frameworks for data pipelines. 
    • RDBMS: MySQL, AWS RDS/Aurora MySQL, PostgreSQL, Oracle, MS SQL-Server or equivalent.
    • Cloud: AWS, Azure and/or GCP.
    • Data warehouse: Databricks, Google Big Query, AWS Redshift, Snowflake or equivalent.

An ideal candidate has:

  • Comfort working with business customers to gather requirements and gain a deep understanding of varied datasets.
  • A self-starter mentality: motivated, responsible, innovative, and technology-driven, performing well both solo and as a team member.
  • A proactive problem-solving approach, plus the communication and project management skills to relay findings and solutions across technical and non-technical audiences.
  • ETL and Orchestration Experience.
  • Fivetran, Alteryx or equivalent.
  • DBT or equivalent.
  • Logging and Monitoring: One or more of Splunk, DataDog, AWS Cloudwatch or equivalent.
  • AWS Serverless: AWS API Gateway, Lambda, S3, SNS, SQS, SecretsManager.
  • Other: Docker, Kubernetes, AWS ECR, AWS Fargate, AWS IAM.

Samsara’s Compensation Philosophy: Samsara’s compensation program is designed to deliver Total Direct Compensation (based on role, level, and geography) that is at or above market. We do this through our base salary + bonus/variable + restricted stock unit awards (RSUs) for eligible roles. For eligible roles, a new hire RSU award may be awarded at the time of hire, and additional RSU refresh grants may be awarded annually.

We pay for performance, and top performers in eligible roles may receive above-market equity refresh awards which allow employees to achieve higher market positioning.

The range of annual base salary for full-time employees for this position is below. Please note that base pay offered may vary depending on factors including your city of residence, job-related knowledge, skills, and experience.
$121,380 – $204,000 USD

At Samsara, we welcome everyone regardless of their background. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender, gender identity, sexual orientation, protected veteran status, disability, age, and other characteristics protected by law. We depend on the unique approaches of our team members to help us solve complex problems. We are committed to increasing diversity across our team and ensuring that Samsara is a place where people from all backgrounds can make an impact.

Benefits

Full time employees receive a competitive total compensation package along with employee-led remote and flexible working, health benefits, Samsara for Good charity fund, and much, much more. Take a look at our Benefits site to learn more.

Accommodations 

Samsara is an inclusive work environment, and we are committed to ensuring equal opportunity in employment for qualified persons with disabilities. Please email accessibleinterviewing@samsara.com or click here if you require any reasonable accommodations throughout the recruiting process.

Flexible Working 

At Samsara, we have adopted a flexible way of working, enabling teams and individuals to do their best work, regardless of where they’re based. We value in-person collaboration and know a change of scenery and quiet space to work is welcomed from time to time, but also appreciate that the world of work has changed. Our offices remain open for those who prefer to collaborate or work in-office, but we also encourage fully remote applicants. As most roles are not required to be in the office, we are able to hire remotely where Samsara has an established presence. If a role is required to be in a certain location and candidates do not have work authorization for that location, Samsara will conduct an immigration assessment. If the role is not required to be in a specific location, Samsara will move forward with the remote location that works best for the business. All offers of employment are contingent upon an individual’s ability to secure and maintain the legal right to work at the company.

Fraudulent Employment Offers

Samsara is aware of scams involving fake job interviews and offers. Please know we do not charge fees to applicants at any stage of the hiring process. Official communication about your application will only come from emails ending in ‘@samsara.com’ or ‘@us-greenhouse-mail.io’. For more information regarding fraudulent employment offers, please visit our blog post here.

Apply for this job

10d

Senior Data Engineer

Instacart | Canada - Remote
airflow, sql, Design

Instacart is hiring a Remote Senior Data Engineer

We're transforming the grocery industry

At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers.

Instacart has become a lifeline for millions of people, and we’re building the team to help push our shopping cart forward. If you’re ready to do the best work of your life, come join our table.

Instacart is a Flex First team

There’s no one-size fits all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work—whether it’s from home, an office, or your favorite coffee shop—while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

Overview

 

At Instacart, our mission is to create a world where everyone has access to the food they love and more time to enjoy it together. Millions of customers every year use Instacart to buy their groceries online, and the Data Engineering team is building the critical data pipelines that underpin all of the myriad of ways that data is used across Instacart to support our customers and partners.

About the Role 

 

The Finance data engineering team plays a critical role in defining how financial data is modeled and standardized for uniform, reliable, timely and accurate reporting. This is a high impact, high visibility role owning critical data integration pipelines and models across all of Instacart’s products. This role is an exciting opportunity to join a key team shaping the post-IPO financial data vision and roadmap for the company.

 

About the Team 

 

Finance data engineering is part of the Infrastructure Engineering pillar, working closely with accounting, billing & revenue teams to support the monthly/quarterly book close, retailer invoicing and internal/external financial reporting. Our team collaborates closely with product teams to capture critical data needed for financial use cases.

 

About the Job 

  • You will be part of a team with a large amount of ownership and autonomy.
  • Large scope for company-level impact working on financial data.
  • You will work closely with engineers and both internal and external stakeholders, owning a large part of the process from problem understanding to shipping the solution.
  • You will ship high quality, scalable and robust solutions with a sense of urgency.
  • You will have the freedom to suggest and drive organization-wide initiatives.

 

About You

Minimum Qualifications

  • 6+ years of working experience in a Data/Software Engineering role, with a focus on building data pipelines.
  • Expert in SQL, with knowledge of Python.
  • Experience building high quality ETL/ELT pipelines.
  • Past experience with data immutability, auditability, slowly changing dimensions, or similar concepts (see the sketch after this list).
  • Experience building data pipelines for accounting/billing purposes.
  • Experience with cloud-based data technologies such as Snowflake, Databricks, Trino/Presto, or similar.
  • Adept at fluently communicating with many cross-functional stakeholders to drive requirements and design shared datasets.
  • A strong sense of ownership, and an ability to balance a sense of urgency with shipping high quality and pragmatic solutions.
  • Experience working with a large codebase on a cross functional team.
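
The slowly-changing-dimensions bullet above can be illustrated with a small Type 2 update in pandas. This is a generic sketch, not Instacart’s implementation; the table shapes and dates are hypothetical.

```python
# Type 2 SCD sketch: close out the old row, append a new current row.
import pandas as pd

HIGH_DATE = pd.Timestamp("9999-12-31")

dim = pd.DataFrame({
    "customer_id": [1],
    "tier": ["silver"],
    "valid_from": [pd.Timestamp("2024-01-01")],
    "valid_to": [HIGH_DATE],
})
incoming = pd.DataFrame({"customer_id": [1], "tier": ["gold"]})
as_of = pd.Timestamp("2024-06-01")

# Find current rows whose attributes changed in the incoming batch.
current = dim[dim["valid_to"] == HIGH_DATE]
merged = current.merge(incoming, on="customer_id", suffixes=("", "_new"))
changed = merged[merged["tier"] != merged["tier_new"]]

# Expire the old version as of the load date.
dim.loc[dim["customer_id"].isin(changed["customer_id"])
        & (dim["valid_to"] == HIGH_DATE), "valid_to"] = as_of

# Append the new current version.
new_rows = changed.assign(tier=changed["tier_new"],
                          valid_from=as_of, valid_to=HIGH_DATE)[dim.columns]
dim = pd.concat([dim, new_rows], ignore_index=True)
print(dim)
```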

 

Preferred Qualifications

  • Bachelor’s degree in Computer Science, computer engineering, electrical engineering OR equivalent work experience.
  • Experience with Snowflake, dbt (data build tool) and Airflow
  • Experience with data quality monitoring/observability, either using custom frameworks or tools like Great Expectations, Monte Carlo, etc. (see the sketch after this list)
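
For the data-quality bullet above, a custom-framework check can be as small as the following sketch; the table, columns, and thresholds are hypothetical.

```python
# Minimal custom data-quality checks; names and rules are hypothetical.
import pandas as pd

def run_checks(df: pd.DataFrame) -> list[str]:
    failures = []
    if df.empty:
        failures.append("table is empty")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if df["amount"].lt(0).any():
        failures.append("negative amounts found")
    return failures

orders = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, -5.0, 3.0]})
problems = run_checks(orders)
if problems:
    # A production pipeline would alert on-call or halt downstream models.
    raise ValueError("; ".join(problems))
```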

 

#LI-Remote

See more jobs at Instacart

Apply for this job

11d

Azure Data Engineer

PSI CRO | Remote
scala, sql, Design, azure, api, python

PSI CRO is hiring a Remote Azure Data Engineer

Job Description

PSI CRO are looking for a hands-on, experienced Azure Data Engineer who is visionary, self-directed, and comfortable supporting the various data & analytics needs of multiple teams, systems, and products. In this role, you will be responsible for defining high-volume data processing pipelines and dataflows in a multi-tenant environment. You will also verify data integrity and curate data for consumption by the data warehouse, data analytics tools, and machine learning models.

Responsibilities:

  • Participate in architecture design and implementation of high-performance, scalable, and optimized data solutions.
  • Data Modelling and pipeline development to load models from multiple data sources.
  • Design, build and automate the deployment of data pipelines and applications to support data scientists and researchers with their reporting and data requirements
  • Provide input on Azure technologies and industry best practices in the field of data warehouse architecture and modelling.
  • Collaborate with internal business units and data science teams on business requirements, data access, processing/transformation and reporting needs and leverage existing and new tools to provide solutions
  • Maintain, monitor, upgrade and secure the SQL Server/Azure platform in partnership with established vendor.
  • Develop ETL (extract, transform and load) processes to populate Data Marts and Warehouses
  • Develop systems integrations between traditional databases and modern cloud APIs.
  • Design, model, and develop across relational databases, data warehouses, and Synapse
  • Troubleshoot data integrations/data feeds between systems.

Qualifications

  • Advanced SQL development experience
  • 10+ years of IT experience in the data engineering space or similar
  • Must have experience with at least one end to end implementation of Azure cloud data warehouse
  • Experience with Lakehouse architecture and design for multi-tenant environments, OLTP data modeling, dimensional data modeling, composite modeling, and data transformation into analytical data structures for analytics and statistical model processing.
  • Experience with Azure SQL Server technologies including Synapse SQL Dedicated and On-Demand pools. 
  • Experience with Azure products and services including Azure Data Lake Gen2, Azure Pipelines, Azure Databricks, the Databricks API, Databricks error logging, Azure SQL Server, and Azure Analysis Services.
  • Experience with data integration through APIs, web services, SOAP, and/or REST services.
  • Experience in Power BI, Power BI Services, Power BI Gateway and Power BI Dataflow
  • Additional programming experience is a plus (Python, Scala, R). 

See more jobs at PSI CRO

Apply for this job

12d

Lead Data Engineer

HatchWorks Technologies | Multiple Locations (5), Remote
agile, oracle, Design, scrum, git, postgresql, mysql, python, AWS

HatchWorks Technologies is hiring a Remote Lead Data Engineer

We are HatchWorks Technologies

We are innovators, technologists, and builders - all dedicated to creating intelligent, purpose-built software products and solutions that improve the way people work and live. Our solutions drive revenue, market share, operational efficiencies, and, most importantly, delightful user experiences for industry leaders in healthcare, financial services, and communications, to name a few.

Our key differentiator is our product-centric approach, putting the end-user first. You will work with user-obsessed experts who always start with “why” before “what” and aspire to build feasible solutions that are viable for our customers' business and valuable for the end user. We focus on outcomes over output and believe in accelerating time to value for our customers in an agile, focused, collaborative manner. The fabric behind all of this is our people, culture, and core values, holding us all accountable to each other.

About the Role

HatchWorks is searching for an experienced data-driven Lead Data Engineer with deep knowledge in developing enterprise data solutions to solve mission-critical business needs for our clients. A Lead Data Engineer within HatchWorks will deliver successful projects by providing skilled technical expertise, leveraging strong interpersonal communication skills, and fostering deep collaboration in an Agile software development environment. This role requires a comprehensive background encompassing roles such as Data Engineer, Data Architect, Data Analyst, or Machine Learning Engineer, with a proven track record of leading and mentoring a team of data-related engineers. The ideal candidate excels in engaging with product owners and stakeholders to accurately capture and define user stories, ensuring project requirements are met effectively and in accordance with engineering best practices.

Responsibilities:

  • Create and enhance data solutions enabling seamless delivery of data, responsible for collecting, parsing, managing, and analyzing large sets of data across different domains.
  • Design and develop data pipelines, data ingestion, and ETL processes that are scalable, repeatable, and secure to meet stakeholder needs.
  • Build data architecture to support data management strategies, supporting business intelligence initiatives and actionable insights.
  • Develop real-time and batch ETL data processes aligned with business needs, manage and augment data pipelines from raw OLTP databases to data solution structures.
  • Support the Agile Scrum team with planning, scoping, and creation of technical solutions for new product capabilities, ensuring continuous delivery to production.

Qualifications:

  • Experience: 10+ years in roles such as Data Engineer, Data Architect, Data Analyst, or Machine Learning Engineer.
  • Proven track record of leading and mentoring a team of Data related engineers.
  • Experience spearheading agile solution delivery to clients.
  • Strong engagement capabilities with product owners and stakeholders for defining user stories and project requirements.
  • Familiarity with Snowflake, Python, PostgreSQL, MySQL, Oracle, and AWS Athena.
  • Knowledge of AWS services such as S3, Lambdas, Fargate, Step Functions, SQS, SNS, and CloudWatch.
  • Experience with Git using the GitLab DevOps platform.
  • Proficiency with CI/CD pipelines using Jenkins.
  • Data modeling skills, both dimensional and relational.
  • Agile software delivery methodologies understanding.

Technical Skills:

  • 5+ years of experience using Snowflake.
  • 2+ years of Python experience, including handling CSV, JSON, and Parquet files using boto3 and pandas (see the sketch after this list).
  • Expertise in building data ingestion pipelines.
  • Knowledge of relational database skills, including the creation of queries and stored procedures.
  • Ability to read, write, understand, and speak English at a B2 level or higher.
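
As a sketch of the boto3/pandas file handling called out above, the following reads a CSV extract from S3 and writes it back as Parquet. The bucket and key names are hypothetical, and it assumes boto3, pandas, and pyarrow are installed.

```python
# CSV-in, Parquet-out via S3; bucket/key names are hypothetical.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Read a raw CSV extract out of S3 into a DataFrame.
obj = s3.get_object(Bucket="raw-zone", Key="extracts/orders.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Write it back to the curated zone as Parquet (pyarrow engine).
buf = io.BytesIO()
df.to_parquet(buf, index=False)
s3.put_object(Bucket="curated-zone", Key="orders/orders.parquet",
              Body=buf.getvalue())
```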

See more jobs at HatchWorks Technologies

Apply for this job

12d

Data Analytics Engineer

seedtag | Spain, Remote
tableau, sql, Design, azure, AWS

seedtag is hiring a Remote Data Analytics Engineer

We are offering a Data Analytics Engineer position to help build the global contextual advertising leader.

WHO WE ARE

Seedtag is the leading Contextual Advertising Platform. Our proprietary, machine learning-based technology provides human-like understanding of the content in the web, the highest level of brand safety in the industry and unmatched cookieless targeting capabilities.

We engage with the market on both demand and supply side to create, activate and launch high-quality advertising campaigns at scale. We are committed to creating a more beautiful, respectful and engaging way to do advertising.

KEY FIGURES

2014 · Founded by two ex-Googlers

2021 · Fundraising round of 40M€ & +10 countries & +230 Seedtaggers

2022 · Fundraising round of 250M€ + expansion into the U.S. market

2023 · Expansion into 15 countries + 500 Seedtaggers

YOUR CHALLENGE

  • Design and implement robust data models to support complex ad campaign measurement and analysis
  • Leverage your expertise in data warehouse architectures (e.g., Snowflake, Redshift) to ensure efficient data storage, retrieval, and scalability
  • Utilize dbt (Data Build Tool) to automate data transformation processes and maintain data quality
  • Develop and maintain the semantic layer, acting as a single source of truth for business users to access and understand campaign data
  • Collaborate with data scientists, analysts, and business stakeholders to translate business needs into technical requirements
  • Build and maintain data pipelines to ingest data from various sources (ad servers, DSPs, etc.)
  • Monitor and troubleshoot data quality issues, ensuring the accuracy and consistency of information
  • Create and maintain comprehensive data documentation

YOU WILL SUCCEED IN THIS ROLE IF

  • Minimum 3+ years of experience as an Analytics Engineer or similar data-focused role
  • Strong understanding of data modeling concepts and principles
  • Proven experience designing and implementing data warehouse solutions
  • Proficiency in SQL and experience working with relational databases
  • In-depth knowledge of dbt and its capabilities in data transformation
  • Familiarity with the concept of a semantic layer and its importance in data analysis
  • Excellent communication and collaboration skills
  • Ability to work independently and manage multiple projects simultaneously

It would be a big plus if you have:

  • Experience working in the ad tech industry
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure)
  • Experience with data visualization tools (e.g., Tableau, Power BI, Looker)

SEEDTAG PERKS

Key moment to join Seedtag in terms of growth and opportunities

One Seedtag: Work for a month from any of our open offices with travel and stay paid.

Build your home office with a gross budget of up to 1K€ (external screen, chair, table...)

Optional company-paid English, Spanish and/or French courses

Odilo online school, where you can learn as much as you want

We love what we do, but we also love having fun. We have many team activities you can join and enjoy with your colleagues!

BENEFITS OF WORKING AT SEEDTAG

  • Growth: International, highly demanding work environment in one of the fastest-growing AdTech companies in Europe. We reject “that’s the way it’s always been done”. At Seedtag you will find an energetic, fresh, multicultural workplace where our members come from different countries across Europe, LATAM, the US, and beyond!
  • Impact: The chance to have a direct impact, here you don't work for the sake of working, we all have an impact on Seedtag in our own way, rowing in the same direction
  • Diversity of methodology and people: Seedtag’s DNA is unique and highly appreciated by very different types of Seedtaggers. We embrace diversity and encourage everyone to seek the best version of themselves and to show who they really are, with a totally flexible methodology.
  • Flexibility: At Seedtag we trust you; you can work from home, the beach, or the office in our hybrid mode.

Are you ready to join the Seedtag adventure? Then send us your CV!

See more jobs at seedtag

Apply for this job

12d

Business Intelligence Data Engineer

Experian | Heredia, Costa Rica, Remote
agile, Bachelor's degree, tableau, sql, Design, azure, postgresql, mysql, AWS

Experian is hiring a Remote Business Intelligence Data Engineer

Job Description

We are seeking a dynamic professional to fill the role of Business Intelligence Data Engineer to support our operational efficiency program and business operations. In this multifaceted role, you will be responsible for designing, developing, and maintaining our team’s data architecture, integration solutions, and Tableau-based business intelligence (BI) reporting and visualization.

Responsibilities:

Database Design and Management:

  • Design, develop, and maintain the organization's database architecture, ensuring optimal performance, scalability, and reliability.
  • Define and implement data models, schemas, and structures to support business requirements and data analysis needs.
  • Collaborate with stakeholders to understand data requirements and translate them into actionable database designs.

ETL Development:

  • Develop and maintain robust Extract, Transform, Load (ETL) processes to integrate data from multiple sources into the database.
  • Design and implement data transformation workflows to ensure data consistency, accuracy, and integrity.
  • Optimize ETL processes for performance and efficiency, minimizing processing time and resource utilization.

Data Integration and Management:

  • Implement data integration solutions to facilitate seamless data flow between disparate systems, applications, and databases.
  • Develop and maintain data pipelines to automate the extraction, transformation, and loading of data from source systems to the target database.
  • Ensure data quality and consistency throughout the integration process, implementing data validation and error handling mechanisms.

Tableau Development and BI Reporting:

  • Develop interactive dashboards, reports, and visualizations in Tableau to present data insights and analysis.
  • Collaborate with business analysts and stakeholders to understand reporting requirements and translate them into effective visualizations.
  • Implement best practices for data visualization, including storytelling, interactivity, and usability.

Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • Proven experience in database architecture, ETL development, data engineering, and Tableau development roles.
  • Proficiency in database management systems (e.g., SQL Server, MySQL, PostgreSQL) and ETL tools (e.g., AWS Glue, Apache Spark, Talend).
  • Strong understanding of data modeling principles, database design concepts, and data integration techniques.
  • Experience with cloud platforms (e.g., AWS, Azure, GCP) and related services for data storage and processing.
  • Proficiency in Tableau Desktop and Tableau Server, with a strong understanding of Tableau features and functionalities.
  • Knowledge of data visualization best practices and principles, with the ability to create compelling and insightful visualizations.
  • Familiarity with SQL and relational databases for data querying and manipulation.
  • Experience working in an agile environment and collaborating with cross-functional teams.
  • Tableau certification(s) preferred, but not required.
  • Excellent problem-solving skills and the ability to troubleshoot complex data issues.
  • Effective communication and collaboration skills, with the ability to work across teams and communicate technical concepts to non-technical audiences.

See more jobs at Experian

Apply for this job

13d

Sr Data Engineer

NationsBenefits | Plantation, FL, Remote
Bachelor's degree, tableau, sql, Design, python

NationsBenefits is hiring a Remote Sr Data Engineer

At NationsBenefits, we are committed to helping health plan members achieve a better quality of life through supplemental benefit solutions. We are also passionate about supporting the goals of our associates and helping them do their best work. Together, we can make a meaningful and measurable difference in the lives of millions. That is something we can all be proud of.

It all begins with how we care about the people we serve. Since 2015, our mission has guided our principles towards delivering solutions for a rapidly changing industry. Compassionate Care is at the center of all we do, and it unites us to foster an environment where everyone is empowered, inspired, and equipped for success.

We offer a fulfilling work environment that attracts top talent and encourages all associates to do their part in delivering premier service to internal and external customers alike. It’s how we’re transforming the healthcare industry for the better. We provide career advancement opportunities from within the organization with multiple locations in Florida, California, Pennsylvania, Tennessee, Texas, Utah, and India.

You might also like to know that NationsBenefits is also recognized as one of the fastest-growing companies in America. We’re proud of how far we’ve come, and a career with us gives you growth opportunities, too.

Job Overview:

We are seeking an experienced Data Engineer with hands-on experience in data analysis, data engineering, and analytics tools. You will be responsible for designing, developing, and maintaining our analytics infrastructure, ensuring data accuracy, and enabling data-driven decision-making across the organization.

Key Responsibilities:

  • Data Pipeline Development: Design, develop, and maintain ETL (Extract, Transform, Load) pipelines to collect and process data from various sources into a centralized data warehouse.
  • Data Modeling: Create and maintain data models, data dictionaries, and documentation to support efficient data analysis and reporting.
  • Data Analysis: Perform exploratory data analysis, develop data visualizations, and generate actionable insights to support business decision-making.
  • Reporting and Dashboard Development: Build interactive and informative reports and dashboards using tools such as Tableau, Power BI, or similar platforms.
  • Data Quality Assurance: Implement data validation and cleansing processes to ensure data accuracy, consistency, and reliability.
  • Performance Optimization: Continuously monitor and optimize data pipelines and analytics processes for efficiency and scalability.
  • Collaboration: Collaborate with cross-functional teams, including data scientists, business analysts, and stakeholders, to understand their data needs and deliver solutions.
  • Data Governance: Ensure data security, compliance, and governance standards are met, and contribute to data governance initiatives.
  • Stay Current: Stay up-to-date with emerging trends and technologies in data engineering and analytics, and make recommendations for their adoption.

Critical Skills and Experience:

  • Bachelor's degree in Computer Science, Information Technology, or a related field or equivalent experience.
  • 3+ years of experience in a Data Engineer, Analytics Engineer, or similar role.
  • Proficiency in SQL, Python, or other relevant programming languages.
  • Strong experience with data warehousing concepts and tools, preferably including Databricks.
  • Experience with ETL tools and processes, preferably including dbt.
  • Familiarity with data visualization tools.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and teamwork abilities.
  • Knowledge of data security and compliance standards is a plus.

We offer a competitive salary, comprehensive benefits package, and a dynamic work environment. We are seeking an individual with a strong passion for data. If you meet these qualifications and are committed to delivering exceptional results for our customers, we encourage you to apply for this exciting and impactful opportunity today at NationsBenefits.com.

NationsBenefits is an Equal Opportunity Employer

See more jobs at NationsBenefits

Apply for this job

13d

Senior Data Engineer

BloomreachRemote CEE, Czechia, Slovakia
remote-firstc++kubernetespython

Bloomreach is hiring a Remote Senior Data Engineer

Bloomreach is the world’s #1 Commerce Experience Cloud, empowering brands to deliver customer journeys so personalized, they feel like magic. It offers a suite of products that drive true personalization and digital commerce growth, including:

  • Discovery, offering AI-driven search and merchandising
  • Content, offering a headless CMS
  • Engagement, offering a leading CDP and marketing automation solutions

Together, these solutions combine the power of unified customer and product data with the speed and scale of AI optimization, enabling revenue-driving digital commerce experiences that convert on any channel and every journey. Bloomreach serves over 850 global brands including Albertsons, Bosch, Puma, FC Bayern München, and Marks & Spencer. Bloomreach recently raised $175 million in a Series F funding round, bringing its total valuation to $2.2 billion. The investment was led by Goldman Sachs Asset Management with participation from Bain Capital Ventures and Sixth Street Growth. For more information, visit Bloomreach.com.

 

We want you to join us as a full-time Senior Data Engineer on our Data Pipeline team. We work remote-first, but we are more than happy to meet you in our nice office in Bratislava or Brno. And if you are curious about who your engineering manager will be, check out Vaclav's LinkedIn.

Intrigued? Read on…

Your responsibilities

  • You will develop and maintain a Data Lakehouse on top of the GCP platform using Apache Iceberg, BigQuery, BigLake tables, Dataplex, and Dataproc-hosted Apache Spark/Flink, with open file formats like Avro and Parquet (see the sketch after this list)
  • You will help maintain the streaming mechanism that moves data from Apache Kafka into the Data Lakehouse
  • You will optimise the Data Lakehouse for near-real-time and non-real-time analytical use cases, primarily customer activation and scenario/campaign evaluation
  • You will help with areas like data discovery and managed access to data through the data governance layer and data catalog using Dataplex, so our engineering teams can benefit from this unified Data Lakehouse
  • You will take ownership of data modeling and schema evolution
  • You will help us adopt concepts from Data Fabric and Data Mesh to run data as a product and unlock the potential that data holds for our clients
  • You will bring expertise from similar previous projects to influence how we adopt and evolve the concepts above, as well as topics like zero-copy integration and reverse ETL, to ease integration with clients’ platforms
  • You will also help maintain the existing data exports to Google BigQuery using Dataflow and Apache Beam
  • You will help us run and support our services in production, handling high-volume traffic on Google Cloud Platform and Kubernetes.
  • You will review the code of your peers and they'll review yours. We have high code quality standards, and the four-eyes principle is a must!
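
As referenced in the first bullet, here is a minimal PySpark Structured Streaming sketch of the Kafka-to-Iceberg landing pattern. The catalog, bucket, broker, topic, table, and event schema are all illustrative, and the job assumes the Iceberg Spark runtime and the Spark Kafka connector are on the classpath; treat it as a sketch of the pattern, not the team's actual pipeline.

```python
# Illustrative only: a Structured Streaming job that lands Kafka events
# in an Iceberg table on GCS. Catalog, bucket, broker, topic, and table
# names are invented; requires the iceberg-spark-runtime and
# spark-sql-kafka packages on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = (
    SparkSession.builder.appName("kafka-to-iceberg")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "gs://example-bucket/warehouse")
    .getOrCreate()
)

# Expected event payload (hypothetical schema).
schema = StructType([
    StructField("event_id", StringType()),
    StructField("customer_id", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "customer-events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append into the lakehouse table; the checkpoint makes the job restartable.
(
    events.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "gs://example-bucket/checkpoints/customer_events")
    .toTable("lake.analytics.customer_events")
    .awaitTermination()
)
```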

Your qualifications

  • You have production experience with building and operating a Data Lake, Data Warehouse, or Data Lakehouse
  • You have a taste for big data streaming, storage, and processing using open source technologies
  • You can demonstrate your understanding of what it means to treat data as a product
  • You know what Data Meshes and Data Fabrics are, and what is critical to ensure they bring value
  • You are able to learn and adapt. It'll be handy while exploring new tech, navigating our not-so-small code base, or when iterating on our team processes.
  • You know data structures, and you know Python and (optionally) Go.

Our tech stack

  • Google Cloud Platform, Dataflow, Apache Beam, BigQuery, BigLake tables
  • Open formats: Iceberg, Avro, Parquet
  • Dataproc, Spark, Flink, Presto
  • Python, Go
  • Apache Kafka, Kubernetes, GitLab
  • Bigtable, Mongo, Redis
  • … and much more

Compensation

  • Salary starting from 3,500 EUR gross per month, going up depending on your experience and skills
  • There's a bonus based on company performance and your salary.
  • You will be entitled to restricted stock options that will truly make you a part of Bloomreach.
  • You can spend 1,500 USD per year on the education of your choice (books, conferences, courses, ...).
  • You can count on free access to Udemy courses.
  • We have 4 company-wide disconnect days throughout the year, during which you will be encouraged not to work and to spend the day "disconnected" with your friends and family.
  • You will have an extra 5 days of paid vacation. Extra days off for extra work-life balance.
  • Food allowance!
  • Sweet referral bonus of up to 3,000 USD based on the position.

Your success story

  • During the first 30 days, you will get to know the team, the company, and the most important processes. You'll work on your first tasks. We will help you get familiar with our codebase and our product.
  • During the first 90 days, you will participate in your first, more complex projects. You will help the team find solutions to various problems, break the solutions down into smaller tasks, and participate in implementation. You will learn how we identify problems, how we prioritize our efforts, and how we deliver value to our customers.
  • During the first 180 days, you'll become an integral part of the team. You will achieve the first goals we set together to help you grow and explore new and interesting things. You will help us deliver multi-milestone projects bringing great value to our customers. You will help us mitigate your first incidents and eventually even join the on-call rotation. You will get a sense of where the team is heading, and you'll help us shape our future.
  • Finally, you'll find out that our values are truly lived by us. We are dreamers and builders. Join us!

 

More things you'll like about Bloomreach:

Culture:

  • A great deal of freedom and trust. At Bloomreach we don’t clock in and out, and we have neither corporate rules nor long approval processes. This freedom goes hand in hand with responsibility. We are interested in results from day one. 

  • We have defined our 5 values and the 10 underlying key behaviors that we strongly believe in. We can only succeed if everyone lives these behaviors day to day. We've embedded them in our processes like recruitment, onboarding, feedback, personal development, performance review and internal communication. 

  • We believe in flexible working hours to accommodate your working style.

  • We work remote-first with several Bloomreach Hubs available across three continents.

  • We organize company events to experience the global spirit of the company and get excited about what's ahead.

  • We encourage and support our employees to engage in volunteering activities - every Bloomreacher can take 5 paid days off to volunteer*.
  • The Bloomreach Glassdoor page elaborates on our stellar 4.6/5 rating. The Bloomreach Comparably page culture score is even higher, at 4.9/5

Personal Development:

  • We have a People Development Program -- employees can participate in personal development workshops on various topics run by experts from inside the company. We are continuously developing & updating competency maps for select functions.

  • Our resident communication coach Ivo Večeřa is available to help navigate work-related communications & decision-making challenges.*
  • Our managers are strongly encouraged to participate in the Leader Development Program to develop in the areas we consider essential for any leader. The program includes regular comprehensive feedback, consultations with a coach and follow-up check-ins.

  • Bloomreachers utilize the $1,500 professional education budget on an annual basis to purchase education products (books, courses, certifications, etc.)*

Well-being:

  • The Employee Assistance Program -- with counselors -- is available for non-work-related challenges.*

  • Subscription to Calm - sleep and meditation app.*

  • We organize ‘DisConnect’ days where Bloomreachers globally enjoy one additional day off each quarter, allowing us to unwind together and focus on activities away from the screen with our loved ones.

  • We facilitate sports, yoga, and meditation opportunities for each other.

  • Extended parental leave up to 26 calendar weeks for Primary Caregivers.*

Compensation:

  • Restricted Stock Units or Stock Options are granted depending on a team member’s role, seniority, and location.*

  • Everyone gets to participate in the company's success through the company performance bonus.*

  • We offer an employee referral bonus of up to $3,000 paid out immediately after the new hire starts.

  • We celebrate work anniversaries -- Bloomversaries!*

(*Subject to employment type. Interns are exempt from marked benefits, usually for the first 6 months.)

If this position doesn't suit you, but you know someone who might be a great fit, share it - we will be very grateful!


Any unsolicited resumes/candidate profiles submitted through our website or to personal email accounts of employees of Bloomreach are considered property of Bloomreach and are not subject to payment of agency fees.

 #LI-Remote

See more jobs at Bloomreach

Apply for this job

14d

Senior Data Engineer

Bosch GroupPlymouth, MI, Remote
agilenosqloracleansiblejavajenkinspython

Bosch Group is hiring a Remote Senior Data Engineer

Job Description

We are on a mission to turn the latest technology into outstanding Bosch products and services. Our team develops the perception stack for the next generation of automatic parking systems. We drive the development of computer vision and ultrasonic sensor perception, and we create the software that turns raw sensor information into a vector-space representation.

This vector-space representation is the foundation for industry-leading automatic parking functions. With our latest sensor generations, we have successfully introduced machine learning into our products. This was the first step on an exciting journey, and there is a lot of opportunity ahead.

If you are the kind of person who combines a deep software engineering background with a "we can make it happen" attitude, let's have a more detailed chat.

  • As a Senior Data Engineer, you will develop and operate data pipelines that deliver data from our engineering and customer fleets to power our machine learning pipelines and provide data for decision-making (a minimal sketch follows this list).
  • You will shape the future of automatic parking systems and establish best practices for embedded AI projects.
  • You will enable function developers to make use of your pipeline artifacts.
  • As part of an agile team, your ideas will be heard and will impact the decision-making process. In line with our goal to invent for life, you will work on solutions that are both innovative and ethical.
  • You will collaborate with scientists, electrical engineers, machine learning engineers, and MLOps engineers to have real-world impact.
  • Lifelong learning is crucial for long-term success, and we encourage you to stay current with the latest research by attending conferences and sharing your knowledge throughout the enterprise.
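
As promised in the first bullet above, here is a minimal Python sketch of the kind of fleet-data ingestion step this role involves: validate incoming recording manifests and partition the raw files by vehicle and date so downstream ML pipelines can find them. The directory layout and manifest fields are invented for the example; they are not Bosch's actual format.

```python
# Invented example: validate fleet recording manifests dropped into an
# inbox and partition the raw files by vehicle and date for downstream
# ML pipelines. Paths and manifest fields are hypothetical.
import json
import shutil
from pathlib import Path

INBOX = Path("/data/inbox")            # hypothetical upload drop zone
LAKE = Path("/data/lake/recordings")   # hypothetical partitioned store

def is_valid(manifest: dict) -> bool:
    """Basic quality gate: required keys present and a non-empty payload."""
    required = ("vehicle_id", "date", "size_bytes")
    return all(k in manifest for k in required) and manifest["size_bytes"] > 0

def ingest() -> None:
    for meta_file in INBOX.glob("*.json"):
        manifest = json.loads(meta_file.read_text())
        if not is_valid(manifest):
            continue  # in production: quarantine the file and alert
        dest = LAKE / manifest["vehicle_id"] / manifest["date"]
        dest.mkdir(parents=True, exist_ok=True)
        payload = meta_file.with_suffix(".bin")  # raw sensor recording
        if payload.exists():
            shutil.move(str(payload), str(dest / payload.name))
        shutil.move(str(meta_file), str(dest / meta_file.name))

if __name__ == "__main__":
    ingest()
```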

Qualifications

Basic Qualifications:

  • Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, another engineering discipline, or a foreign equivalent
  • 3+ years of experience building data pipelines within a cloud or cloud-hybrid setup; in-depth understanding of relational database systems (e.g. Oracle, MS SQL Server).
  • 3+ years of experience with distributed computing frameworks (e.g. k8s, Spark)
  • 3+ years of experience in object-oriented software development, (e.g. Python, Java, or Go).
  • 2+ years of experience with Linux.

Preferred:

  • Education: successfully completed Master's degree in Computer Science or other engineering discipline
  • Experience with recent non-relational storage technologies (NoSQL and distributed)
  • Experience with workflow automation tools (e.g. Jenkins, Ansible)
  • Experience with ECU SW re-simulation
  • Experience with various messaging systems (e.g. Kafka)
  • Experience in designing data models and choice of respective data formats
  • Experience with in-vehicle data collection, structured and analytical connectivity

See more jobs at Bosch Group

Apply for this job

15d

Senior Data Engineer

scalanosqlsqlDesignazurepython

K2 Integrity is hiring a Remote Senior Data Engineer

Senior Data Engineer - K2 Integrity - Career Page

See more jobs at K2 Integrity

Apply for this job

16d

Lead Data Integration Engineer

O'Reilly MediaRemote, United States
4 years of experienceagilepostgressqlRabbitMQDesignazuredockerkubernetesjenkinspythonAWS

O'Reilly Media is hiring a Remote Lead Data Integration Engineer

Description

About O’Reilly Media              
         
O’Reilly’s mission is to change the world by sharing the knowledge of innovators. For over 40 years, we’ve inspired companies and individuals to do new things—and do things better—by providing them with the skills and understanding that’s necessary for success.                  
         
At the heart of our business is a unique network of experts and innovators who share their knowledge through us. O’Reilly Learning offers exclusive live training, interactive learning, a certification experience, books, videos, and more, making it easier for our customers to develop the expertise they need to get ahead. And our books have been heralded for decades as the definitive place to learn about the technologies that are shaping the future. Everything we do is to help professionals from a variety of fields learn best practices and discover emerging trends that will shape the future of the tech industry.         
         
Our customers are hungry to build the innovations that propel the world forward. And we help you do just that.               
         
Learn more: https://www.oreilly.com/about/
           
Diversity            
         
At O’Reilly, we believe that true innovation depends on hearing from, and listening to, people with a variety of perspectives. We want our whole organization to recognize, include, and encourage people of all races, ethnicities, genders, ages, abilities, religions, sexual orientations, and professional roles.         
         
Learn more: https://www.oreilly.com/diversity

 

About the Team                 

Our data platform team is dedicated to establishing a robust data infrastructure, facilitating easy access to quality, reliable, and timely data for reporting, analytics, and actionable insights. We focus on designing and building a sustainable and scalable data architecture, treating data as a core corporate asset. Our efforts also include process improvement, governance enhancement, and addressing application, functional, and reporting needs. We value teammates who are helpful, respectful, communicate openly, and prioritize the best interests of our users. Operating across various cities and time zones in the US, our team fosters collaboration to deliver work that brings pride and fulfillment.               

About the Job                 

We are seeking a skilled and thoughtful Lead Data Integration Engineer to contribute to the design and development of a modern data platform. The ideal candidate will possess a deep understanding of modern data platform concepts and will develop and support data integration strategies that align with the organization's goals. The candidate will work hand in hand with the data architect and lead team members. Responsibilities include overseeing the implementation of a data framework covering data integration services such as profiling, ingestion, transformation, quality, and data operations management.

The Lead Data Integration Engineer will be comfortable building software that interacts with a diverse range of data. Additionally, the Lead Data Integration Engineer will create tools for delivering analytics data within O’Reilly, aiding decision-making, and enhancing product features. These tools encompass RESTful web services, custom analytics dashboards, and data visualization.

Our ETL platform primarily uses BigQuery, Pub/Sub, Talend, Python, and PostgreSQL. We develop and support RESTful web applications in Django and use Redshift, Hadoop, and Spark for higher-volume data ETL and analysis. Containerization is integral to our approach: we employ Docker, Jenkins, and Kubernetes for building, deploying, and managing a diverse range of services. As part of our ongoing initiatives, we are migrating our data platform and services to the GCP cloud environment. The candidate will oversee legacy and new data platform initiatives.
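
To ground the stack above, here is a minimal sketch of the Pub/Sub-to-BigQuery streaming-insert pattern. Project, subscription, and table names are hypothetical; the snippet assumes the google-cloud-pubsub and google-cloud-bigquery client libraries and default application credentials, and it is a sketch of the pattern rather than O’Reilly's actual implementation.

```python
# Sketch of the Pub/Sub -> BigQuery streaming-insert pattern; project,
# subscription, and table names are hypothetical. Requires the
# google-cloud-pubsub and google-cloud-bigquery packages and default
# application credentials.
import json

from google.cloud import bigquery, pubsub_v1

PROJECT = "example-project"                 # hypothetical
SUBSCRIPTION = "events-sub"                 # hypothetical
TABLE = "example-project.analytics.events"  # hypothetical

bq = bigquery.Client(project=PROJECT)
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def handle(message):
    """Decode one JSON event and stream it into BigQuery."""
    row = json.loads(message.data.decode("utf-8"))
    errors = bq.insert_rows_json(TABLE, [row])  # streaming insert
    if errors:
        message.nack()  # let Pub/Sub redeliver on failure
    else:
        message.ack()

streaming_pull = subscriber.subscribe(sub_path, callback=handle)
print(f"Listening on {sub_path}; Ctrl+C to stop.")
streaming_pull.result()  # block the main thread
```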

Salary Range: $155,000-$170,000

What You'll Do                    

The Lead Data Integration Engineer will:

  • Develop and implement data integration strategies aligned with the organization's goals and objectives
  • Identify opportunities to streamline data workflows and enhance data quality
  • Oversee the integration of various data sources and ensure seamless data flow across different systems within the organization
  • Collaborate with the architect, enforce ETL best practices, and oversee code reviews for the team
  • Collaborate with cross-functional teams, including data engineers, data analysts, and business stakeholders, to understand data requirements and deliver integrated solutions that meet business needs
  • Monitor and optimize data integration processes for performance, scalability, and efficiency
  • Lead a team of data integration and data support professionals; provide guidance, set priorities, and mentor and support team members to ensure successful project delivery
  • Oversee and maintain documentation for data integration processes, including data mappings, transformations, and data lineage

What You'll Have                                  

Required:                           

  • Bachelor’s degree in Computer Science or related field              
  • In lieu of degree, equivalent education and/or experience may be considered             
  • 4 years of experience leading teams in data warehousing and/or data engineering             
  • Proven experience in data integration, ETL development and data warehousing             
  • Strong technical skills in GCP, Talend, BigQuery             
  • Deep understanding of SQL, Shell scripting, and Python             
  • Experience with Agile software development lifecycle              
  • Experience with Django, Pub/Sub, Hadoop, Spark and Kubernetes             
  • Knowledge of Postgres, Redshift, RabbitMQ, Jenkins, and Docker
  • Knowledge of BI tools such as Qlik Sense or Looker
  • Knowledge of data governance principles and regulatory requirements             
  • A high level of comfort with DevOps processes
  • Excellent leadership and communication skills
  • Ability to work effectively in a fast-paced, dynamic environment     
          
Preferred:          
  • Experience with BI tools such as Qlik Sense or Looker
  • Relevant certifications in data integration (e.g. Talend Data Integration) and cloud technologies (e.g. GCP, AWS, Azure) are a plus

          

See more jobs at O'Reilly Media

Apply for this job