Data Engineer Remote Jobs

109 Results

12h

Senior Data Engineer

Plentific - London, England, United Kingdom, Remote Hybrid
B2B

Plentific is hiring a Remote Senior Data Engineer

We're Plentific, the world’s leading real-time property solution, and we're looking for top talent to join our ambitious team. We’re a global company, headquartered in London, and operating across the United Kingdom, Germany and North America.

As a B2B company, we're dedicated to helping landlords, letting agents and property managers streamline operations, unlock revenue, increase tenant satisfaction, and remain compliant through our award-winning SaaS technology platform. We also work with SMEs and large service providers, helping them access more work and grow their businesses.

We're not just any proptech - we're backed by some of the biggest names in the business, including A/O PropTech, Highland Europe, Mubadala, RXR Digital Ventures and Target Global and work with some of the world’s most prominent real estate players.

But we're not just about business - we're also building stronger communities where people can thrive by ensuring the quality and safety of buildings, supporting decarbonisation through our ESG Retrofit Centre of Excellence and championing diversity across the sector through the Women’s Trade Network. We're committed to creating exceptional experiences for our team members, too. Our culture is open and empowering, and we're always looking for passionate, driven individuals to join us on our mission.

So, what's in it for you?

  • A fast-paced, friendly, collaborative and hybrid/flexible working environment
  • Ample opportunities for career growth and progression
  • A multicultural workplace with over 20 nationalities that value diversity, equity, and inclusion
  • Prioritisation of well-being with social events, digital learning, career development programs and much more

If you're ready to join a dynamic and innovative team that’s pioneering change in real estate, we'd love to hear from you.

The Role

We’re looking for a proactive and energetic individual with extensive experience in Data Engineering and Machine Learning to join our growing business. You’ll be working alongside highly technical and motivated teams and report to the Head of Data Engineering. You would be expected to contribute to the growth of the data/ML/AI products both internally and for our customers. You’ll be working on the cutting edge of technology and will thrive if you have a desire to learn and keep up to date with the latest trends in Data Infrastructure, Machine Learning and Generative AI. For people with the right mindset, this provides a very intellectually stimulating environment.

Responsibilities

  • Be one of the architects for our data model defined in dbt (see the sketch after this list).
  • Take ownership of and refine our existing real-time data pipelines.
  • Create and maintain analytics dashboards that are defined as-code in Looker
  • Create and productize Machine Learning and LLM-based features
  • Be a mentor for the more junior data engineers in the team

Requirements

  • Proficiency in SQL and Python. A live coding interview is part of the hiring process.
  • Experience in data modelling with dbt
  • Experience organising the data governance across a company, including the matrix of access permissions for a data warehouse.
  • Experience with BI tools as code. Looker experience is a nice-to-have.
  • Experience building ETL/ELT data ingestion and transformation pipelines
  • Experience training Machine Learning Algorithms
  • Experience productizing Machine Learning from the infrastructure perspective (MLOps)
  • Nice to have: experience productizing multimodal (text, images, audio, video) GenAI products with frameworks such as LangChain
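
To make the dbt bullet above concrete, here is a minimal, hedged sketch of orchestrating a dbt project from Python. The selector names are hypothetical; Plentific's actual project layout is not public.

    import subprocess

    # Hypothetical dbt selectors; real model names depend on the project.
    DBT_STEPS = [
        ["dbt", "run", "--select", "staging"],   # build staging models
        ["dbt", "test", "--select", "staging"],  # run schema/data tests
        ["dbt", "run", "--select", "marts"],     # build downstream marts
    ]

    def build_data_model() -> None:
        """Run dbt steps in order, stopping on the first failure."""
        for step in DBT_STEPS:
            subprocess.run(step, check=True)  # raises CalledProcessError on failure

    if __name__ == "__main__":
        build_data_model()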

As you can see, we are quickly progressing with our ambitious plans and are eager to grow our team of doers to achieve our vision of managing over 2 million properties through our platform across various countries. You can help us shape the future of property management across the globe. Here’s what we offer:

  • A competitive compensation package
  • 25 days annual holiday
  • Flexible working environment including the option to work abroad
  • Private health care for you and immediate family members with discounted gym membership, optical, dental and private GP
  • Enhanced parental leave
  • Life insurance (4x salary)
  • Employee assistance program
  • Company volunteering day and charity salary sacrifice scheme
  • Learning management system powered by Udemy
  • Referral bonus and charity donation if someone you introduce joins the company
  • Season ticket loan, Cycle to work, Electric vehicle and Techscheme programs
  • Pension scheme
  • Work abroad scheme
  • Company-sponsored lunches, dinners and social gatherings
  • Fully stocked kitchen with drinks, snacks, fruit, breakfast cereal etc.

See more jobs at Plentific

Apply for this job

1d

Data Engineer

Blend36 - Edinburgh, United Kingdom, Remote
terraform, sql, Design, azure, python

Blend36 is hiring a Remote Data Engineer

Job Description

Life as a Data Engineer at Blend

We are looking for someone who is ready for the next step in their career and is excited by the idea of solving problems and designing best-in-class solutions. 

However, they also need to be aware of the practicalities of making a difference in the real world – whilst we love innovative advanced solutions, we also believe that sometimes a simple solution can have the most impact.   

Our Data Engineer is someone who feels the most comfortable around solving problems, answering questions and proposing solutions. We place a high value on the ability to communicate and translate complex analytical thinking into non-technical and commercially oriented concepts, and experience working on difficult projects and/or with demanding stakeholders is always appreciated. 

Reporting to a Senior Data Engineer and working closely with the Data Science and Business Development teams, this role will be responsible for driving high delivery standards and innovation in the company. Typically, this involves delivering data solutions to support the provision of actionable insights for stakeholders. 

What can you expect from the role? 

  • Prepare and present data-driven solutions to stakeholders.
  • Analyse and organise raw data.
  • Design, develop, deploy and maintain ingestion, transformation and storage solutions.
  • Use a variety of Data Engineering tools and methods to deliver.
  • Own projects end-to-end.
  • Contribute to solutions design and proposal submissions.
  • Support the development of the data engineering team within Blend.
  • Maintain in-depth knowledge of data ecosystems and trends.
  • Mentor junior colleagues.

Qualifications

What you need to have 

  • Proven track record of building analytical production pipelines using Python and SQL (see the sketch after this list).
  • Working knowledge of large-scale data stores such as data warehouses, and of the best practices and principles for managing them.
  • Experience with development, test and production environments, and knowledge of CI/CD.
  • ETL technical design, development and support.
  • Knowledge of Data Warehousing and database operation, management & design.
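
As a hedged illustration of the first must-have, here is a self-contained Python-plus-SQL transform step with a CI-friendly assertion; the table and column names are invented, not Blend's.

    import sqlite3

    def dedupe_orders(conn: sqlite3.Connection) -> None:
        """Keep only the newest row per order_id (hypothetical schema)."""
        conn.executescript("""
            DROP TABLE IF EXISTS orders_clean;
            CREATE TABLE orders_clean AS
            SELECT order_id, amount, updated_at FROM (
                SELECT *, ROW_NUMBER() OVER (
                    PARTITION BY order_id ORDER BY updated_at DESC) AS rn
                FROM orders_raw)
            WHERE rn = 1;
        """)

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders_raw (order_id, amount, updated_at)")
        conn.executemany("INSERT INTO orders_raw VALUES (?, ?, ?)",
                         [(1, 10.0, "2024-01-01"), (1, 12.0, "2024-01-02")])
        dedupe_orders(conn)
        # A check like this can run in CI against fixture data.
        assert conn.execute("SELECT COUNT(*) FROM orders_clean").fetchone()[0] == 1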

Nice to have 

  • Knowledge of container deployment.
  • Experience creating ARM templates (or other IaC, e.g., CloudFormation, Terraform).
  • Experience in cloud infrastructure management.
  • Experience of Machine Learning deployment.
  • Experience in Azure tools and services such as Azure ADFv2, Azure Databricks, Storage, Azure SQL, Synapse and Azure IoT.
  • Experience of leveraging data out of SAP or S/4HANA.

See more jobs at Blend36

Apply for this job

1d

Staff Data Security Engineer

Gemini - Remote (USA)
remote-first, Design, kubernetes, linux, python, AWS

Gemini is hiring a Remote Staff Data Security Engineer

About the Company

Gemini is a global crypto and Web3 platform founded by Tyler Winklevoss and Cameron Winklevoss in 2014. Gemini offers a wide range of crypto products and services for individuals and institutions in over 70 countries.

Crypto is about giving you greater choice, independence, and opportunity. We are here to help you on your journey. We build crypto products that are simple, elegant, and secure. Whether you are an individual or an institution, we help you buy, sell, and store your bitcoin and cryptocurrency. 

At Gemini, our mission is to unlock the next era of financial, creative, and personal freedom.

In the United States, we have a flexible hybrid work policy for employees who live within 30 miles of our office headquartered in New York City and our office in Seattle. Employees within the New York and Seattle metropolitan areas are expected to work from the designated office twice a week, unless there is a job-specific requirement to be in the office every workday. Employees outside of these areas are considered part of our remote-first workforce. We believe our hybrid approach for those near our NYC and Seattle offices increases productivity through more in-person collaboration where possible.

The Department: Platform Security

The Role: Staff Data Security Engineer

The Platform Security team secures Gemini’s infrastructure through service hardening and by developing and supporting a suite of foundational tools. We provide secure-by-default infrastructure, consumable security services, and expert consultation to engineering teams for secure cloud and non-cloud infrastructure.

The Platform Security team covers a broad problem space that includes all areas of Gemini’s platform infrastructure. In the past, this team has focused specifically on cloud security, and we continue to invest heavily in this area. This role will bring additional depth and specialization in database design and security. We also value expertise in neighboring areas of infrastructure and platform security engineering, including PKI, core cryptography, identity management, and network security.

Responsibilities:

  • Design, deploy, and maintain databases and relevant security controls for security and engineering teams.
  • Build and improve security controls covering data in transit and data at rest (see the sketch after this list).
  • Partner with engineering teams on security architecture and implementation decisions.
  • Own our database security roadmap and act as relevant SME within Gemini.
  • Collaborate with AppSec, Threat Detection, Incident Response, GRC and similar security functions to identify, understand, and reduce security risk.
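
Gemini's internal controls are not public; as a hedged sketch of the data-at-rest bullet above, here is a small AWS audit (the posting mentions AWS) that flags RDS instances without storage encryption.

    import boto3

    def unencrypted_rds_instances(region: str = "us-east-1") -> list:
        """Return identifiers of RDS instances without encryption at rest."""
        rds = boto3.client("rds", region_name=region)
        flagged = []
        for page in rds.get_paginator("describe_db_instances").paginate():
            for db in page["DBInstances"]:
                if not db.get("StorageEncrypted", False):
                    flagged.append(db["DBInstanceIdentifier"])
        return flagged

    if __name__ == "__main__":
        print(unencrypted_rds_instances())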

Minimum Qualifications:

  • 6+ years of experience in the field.
  • Extensive knowledge of database architecture and security principles.
  • Significant experience with container orchestration technologies and relevant security considerations. We often use Kubernetes and EKS.
  • Experience in SRE, systems engineering, or network engineering.
  • Experience with distributed systems or cloud computing. We often use AWS.
  • Significant software development experience. We often use Python or Go.
  • Experience building and owning high-availability critical systems or cloud-based services.
  • Able to self-scope, define, and manage short- and long-term technical goals.

Preferred Qualifications:

  • Proven track record securing databases and ensuring data integrity.
  • Experience securing AWS and Linux environments, both native and third-party.
  • Experience designing and implementing cryptographic infrastructure such as PKI, secrets management, authentication, or secure data storage/transmission.
  • Experience designing and implementing systems for identity and access management.
  • Experience with configuration management and infrastructure as code. We often use Terraform.

It Pays to Work Here
 
The compensation & benefits package for this role includes:
  • Competitive starting salary
  • A discretionary annual bonus
  • Long-term incentive in the form of a new hire equity grant
  • Comprehensive health plans
  • 401K with company matching
  • Paid Parental Leave
  • Flexible time off

Salary Range: The base salary range for this role is between $172,000 - $215,000 in the State of New York, the State of California and the State of Washington. This range is not inclusive of our discretionary bonus or equity package. When determining a candidate’s compensation, we consider a number of factors including skillset, experience, job scope, and current market data.

At Gemini, we strive to build diverse teams that reflect the people we want to empower through our products, and we are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Equal Opportunity is the Law, and Gemini is proud to be an equal opportunity workplace. If you have a specific need that requires accommodation, please let a member of the People Team know.

#LI-AH1

Apply for this job

2d

Data Engineer

Status - Remote (Worldwide)
airflow, sql, docker, linux, python

Status is hiring a Remote Data Engineer

About Status

Status is building the tools and infrastructure for the advancement of a secure, private, and open web3. 

With the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.

As an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. 

As a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of Apps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol, the p2p communication layer for Web3.

As a team, Status has been completely distributed since inception. Our team is currently 200+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. 

We care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.

About the Infrastructure Team

We’re a team scattered across the world, working to provide various tools and services for the projects in the company. We work asynchronously, with a high level of independence.

We are seeking a Data Engineer to construct and maintain dynamic dashboards for our Open Source projects. The successful candidate will collect, analyze, and interpret data to provide actionable insights, enabling us to effectively track and improve our project progress.

 

Key responsibilities

  • Develop and implement data pipelines for Open Source Project development, communication campaigns, and finance overview
  • Build visualization tools to track and analyze project progress, communication effectiveness, and financial health
  • Work with team leads to identify key elements for KPI analysis
  • Manage Data Warehouse to maintain data quality

You ideally will have 

  • Experience with Data Pipeline implementation (DBT, Airflow, Airbyte); see the sketch after this list
  • Experience with SQL optimization
  • Experience with Python or other scripting languages
  • Experience with Grafana or other visualization tools
  • Experience in, and passion for, blockchain technology
  • A strong alignment to our principles: https://status.app/manifesto
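
As a hedged sketch of the data-pipeline item above, here is a minimal Airflow 2.x DAG; the DAG id and task bodies are hypothetical, not Status's actual pipelines.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract() -> None:
        ...  # hypothetical: pull project/community metrics from source APIs

    def load() -> None:
        ...  # hypothetical: write transformed rows to the warehouse

    with DAG(
        dag_id="oss_project_metrics",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task  # extract runs before load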

 

Bonus points if 

  • Comfortable working remotely and asynchronously
  • Experience working for an open source organization.  
  • Experience with Linux, Docker
  • Experience with LLM fine-tuning

[Don’t worry if you don’t meet all of these criteria; we’d still love to hear from you if you think you’d be a great fit for this role. Just explain why in your cover letter.]

 

Hiring Process 

  1. Interview with People Ops team
  2. Technical Task
  3. Interview with BI Team Lead
  4. Interview with Infra Team Lead

Compensation

We are happy to pay in any mix of fiat/crypto.

See more jobs at Status

Apply for this job

7d

Data Engineer

Zone IT - Sydney, New South Wales, Australia, Remote Hybrid

Zone IT is hiring a Remote Data Engineer

We are currently seeking a highly motivated and experienced Data Engineer for a full-time position. You will be responsible for designing and implementing data architectures, integrating data from various sources, and optimizing data pipelines to ensure efficient and accurate data processing.

Key responsibilities:

  • Design and implement data architectures, including databases and processing systems
  • Integrate data from various sources and ensure data quality and reliability (see the sketch after this list)
  • Optimize data pipelines for scalability and performance
  • Develop and maintain ETL processes and data transformation solutions
  • Apply data security measures and ensure compliance with data privacy regulations
  • Create and maintain documentation related to data systems design and maintenance
  • Collaborate with cross-functional teams to understand data requirements and provide effective data solutions
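
As a hedged illustration of the data-quality responsibility above, here is a small pandas check that could gate a load step; the frame and key names are invented.

    import pandas as pd

    def quality_report(df: pd.DataFrame, key: str) -> dict:
        """Basic pre-load data-quality checks: duplicate and null keys."""
        return {
            "rows": len(df),
            "duplicate_keys": int(df.duplicated(subset=[key]).sum()),
            "null_keys": int(df[key].isna().sum()),
        }

    if __name__ == "__main__":
        df = pd.DataFrame({"id": [1, 2, 2, None], "value": ["a", "b", "b", "c"]})
        print(quality_report(df, key="id"))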

Key skills and qualifications:

  • Bachelor's degree or higher in Computer Science, Data Science, or a related field
  • Strong proficiency in SQL, Python, and/or Java
  • Experience with ETL processes and data integration
  • Working knowledge of data modeling and database design principles
  • Familiarity with big data technologies such as Hadoop, Spark, or Kafka is a plus
  • Experience with cloud platforms such as AWS, Azure, or GCP is a plus
  • Strong problem-solving and analytical skills
  • Excellent communication and collaboration abilities

About Us

Zone IT Solutions is an Australia-based recruitment company. We specialize in Digital, ERP and larger IT services. We offer flexible, efficient and collaborative solutions to any organization that requires IT experts. Our agile, agnostic and flexible solutions will help you source the IT expertise you need. Our delivery offices are in Melbourne, Sydney and India. If you are looking for new opportunities, please share your profile at Careers@zoneitsolutions.com or contact us at 0434189909.

Also follow our LinkedIn page for new job opportunities and more.

Zone IT Solutions is an equal opportunity employer and our recruitment process focuses on essential skills and abilities. We welcome applicants from a diverse range of backgrounds, including Aboriginal and Torres Strait Islander peoples, people from culturally and linguistically diverse (CALD) backgrounds and people with disabilities.

See more jobs at Zone IT

Apply for this job

8d

Data Engineer

Tech9 - Remote
ML, Full Time, DevOps, agile, terraform, sql, Design, azure, python

Tech9 is hiring a Remote Data Engineer

Data Engineer - Tech9 - Career Page

See more jobs at Tech9

Apply for this job

11d

Sr. Data Engineer - Remote

Trace3 - Remote
DevOps, agile, nosql, sql, Design, azure, graphql, api, java, c++, c#, python, backend

Trace3 is hiring a Remote Sr. Data Engineer - Remote


Who is Trace3?

Trace3 is a leading Transformative IT Authority, providing unique technology solutions and consulting services to our clients. Equipped with elite engineering and dynamic innovation, we empower IT executives and their organizations to achieve competitive advantage through a process of Integrate, Automate, Innovate.

Our culture at Trace3 embodies the spirit of a startup with the advantage of a scalable business. Employees can grow their career and have fun while doing it!

Trace3 is headquartered in Irvine, California. We employ more than 1,200 people all over the United States. Our major field office locations include Denver, Indianapolis, Grand Rapids, Lexington, Los Angeles, Louisville, Texas, San Francisco.  

Ready to discover the possibilities that live in technology?

 

Come Join Us!

Street-Smart - Thriving in Dynamic Times

We are flexible and resilient in a fast-changing environment. We continuously innovate and drive constructive change while keeping a focus on the “big picture.” We exercise sound business judgment in making high-quality decisions in a timely and cost-effective manner. We are highly creative and can dig deep within ourselves to find positive solutions to different problems.

Juice - The “Stuff” it takes to be a Needle Mover

We get things done and drive results. We lead without a title, empowering others through a can-do attitude. We look forward to the goal, mentally mapping out every checkpoint on the pathway to success, and visualizing what the final destination looks and feels like.

Teamwork - Humble, Hungry and Smart

We are humble individuals who understand how our job impacts the company's mission. We treat others with respect, admit mistakes, give credit where it’s due and demonstrate transparency. We “bring the weather” by exhibiting positive leadership and solution-focused thinking. We hug people in their trials, struggles, and failures – not just their success. We appreciate the individuality of the people around us.


 

Who We’re Looking For:

We’re looking to add a Senior Data Integration Engineer with a strong background in data engineering and development.  You will work with a team of software and data engineers to build client-facing data-first solutions utilizing data technologies such as SQL Server and MongoDB. You will develop data pipelines to transform/wrangle/integrate the data into different data zones.

To be successful in this role, you will need to hold extensive knowledge of SQL, relational databases, ETL pipelines, and big data fundamentals.  You will also need to possess strong experience in the development and consumption of RESTful APIs.  The ideal candidate will also be a strong independent worker and learner.

 

What You’ll Be Doing

  • Develop processes and data models for consuming large quantities of 3rd party vendor data via RESTful APIs (see the sketch after this list).
  • Develop data processing pipelines to analyze, transform, and migrate data between applications and systems.
  • Analyze data from multiple sources and negotiate differences in storage schema using the ETL process.
  • Develop APIs for external consumption by partners and customers.
  • Develop and support our ETL environment by recommending improvements, monitoring, and deploying quality and validation processes to ensure accuracy and integrity of data.
  • Design, develop, test, deploy, maintain, and improve data integration pipelines.
  • Create technical solutions that solve business problems and are well engineered, operable, and maintainable.
  • Design and implement tools to detect data anomalies (observability). Ensure that data is accurate, complete, and high quality across all platforms.
  • Troubleshoot data issues and perform root cause analysis to proactively resolve product and operational issues.
  • Assemble large and complex data sets; develop data models based on specifications using structured data sets.
  • Develop familiarity with emerging and complex automations and technologies that support business processes.
  • Develop scalable and re-usable frameworks for ingestion and transformation of large datasets.
  • Work within an Agile delivery / DevOps methodology to deliver product increments in iterative sprints.
  • Work with stakeholders including the Executive, Product, Data and Design teams to support their data infrastructure needs while assisting with data-related technical issues.
  • Develop data models and mappings and build new data assets required by users. Perform exploratory data analysis on existing products and datasets.
  • Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
  • Engage in logical and physical design of databases, table creation, script creation, views, procedures, packages, and other database objects.
  • Create documentation for solutions and processes implemented or updated to ensure team members and stakeholders can correctly interpret it.
  • Design and implement processes and/or process improvements to help the development of technology solutions.
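
As a hedged sketch of the first bullet in this list (consuming vendor data via a RESTful API and staging it), here is a minimal ingest loop; the endpoint, database and field names are all hypothetical, and real vendor APIs will differ.

    import requests
    from pymongo import MongoClient

    # Hypothetical endpoint and database names.
    API_URL = "https://api.example-vendor.com/v1/assets"

    def ingest(page_size: int = 100) -> None:
        coll = MongoClient("mongodb://localhost:27017")["staging"]["assets"]
        page = 1
        while True:
            resp = requests.get(API_URL, timeout=30,
                                params={"page": page, "per_page": page_size})
            resp.raise_for_status()
            items = resp.json()  # assumes the endpoint returns a JSON list
            if not items:
                break
            for item in items:  # idempotent upsert keyed on the vendor's id
                coll.update_one({"_id": item["id"]}, {"$set": item}, upsert=True)
            page += 1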

 

Your Skills and Experience (In Order of Importance):

  • 5+ years of relational database development experience; including SQL query generation and tuning, database design, and data concepts.
  • 5+ years of backend and RESTful API development experience in NodeJS (experience with GraphQL a plus).
  • 5+ years of development experience with the following languages: Python, Java, C#/.NET.
  • 5+ years of experience with SQL and NoSQL databases, including MS SQL and MongoDB.
  • 5+ years consuming RESTful APIs with data ingestion and storage.
  • 5+ years developing RESTful APIs for use by customers and 3rd parties.
  • 3+ years of professional work experience designing and implementing data pipelines in a cloud environment.
  • 3+ years of experience working within Azure cloud.
  • Experience in integrating and ingesting data from external data sources.
  • Strong diagnostic skills and ability to research, troubleshoot, and logically determine solutions.
  • Ability to effectively prioritize tasks in a fast-paced, high-volume, and evolving work environment.
  • Comfortable managing multiple and changing priorities, and meeting deadlines.
  • Highly organized, detail-oriented, excellent time management skills.
  • Excellent written and verbal communication skills.

Actual salary will be based on a variety of factors, including location, experience, skill set, performance, licensure and certification, and business needs. The range for this position in other geographic locations may differ. Certain positions may also be eligible for variable incentive compensation, such as bonuses or commissions, that is not included in the base salary.

Estimated Pay Range

$142,500 - $168,700 USD

The Perks:

  • Comprehensive medical, dental and vision plans for you and your dependents
  • 401(k) Retirement Plan with Employer Match, 529 College Savings Plan, Health Savings Account, Life Insurance, and Long-Term Disability
  • Competitive Compensation
  • Training and development programs
  • Stocked kitchen with snacks and beverages
  • Collaborative and cool culture
  • Work-life balance and generous paid time off

 

***To all recruitment agencies: Trace3 does not accept unsolicited agency resumes/CVs. Please do not forward resumes/CVs to our careers email addresses, Trace3 employees or any other company location. Trace3 is not responsible for any fees related to unsolicited resumes/CVs.

See more jobs at Trace3

Apply for this job

12d

Process Engineer

Tessenderlo Group - Phoenix, AZ, Remote
Design, api

Tessenderlo Group is hiring a Remote Process Engineer

Job Description

Are you an experienced Chemical Engineer passionate about process optimization and hands-on work? Do you thrive in environments where you're given the autonomy to lead, innovate, and solve complex problems? If so, we have an exciting opportunity for you!

As a Process Engineer III with Tessenderlo Kerley, Inc., you will be pivotal in troubleshooting, designing, and implementing different processes at multiple sites. You will collaborate closely with plant operations, HS&E, and project teams to achieve company production, quality control, and compliance goals. In addition, you will work with the Process Engineering Manager and other engineers to learn company tools and standard practices. Tessenderlo Kerley has multiple facilities in the U.S. and abroad, offering countless opportunities for professional growth and development.

The ideal candidate for this role will have a sharp eye for detail, strong organizational skills and the ability to balance multiple projects. You’ll also need a solid technical background in chemical plant operations, an interest in analyzing process data, and the drive to find practical solutions for engineering challenges.

Key Responsibilities:

  • Chemical Engineering – Understanding piping and instrumentation diagrams, mass and energy balances, chemical compatibility, and product quality controls.
  • Process Safety Management – Participation or leadership of PHA/HAZOPs, assisting with change management.
  • Design – P&ID redlines, equipment/instrument specifications, and calculations (line sizing, PSV sizing per API codes, rotating equipment sizing).
  • Project Execution – Scope of work development, gathering and review of vendor bids, and collaboration with other engineering disciplines.
  • Field Work – Provide technical support for troubleshooting, turnarounds and project commissioning efforts at 2-4 sites, with approximately 30-40% travel.

    Qualifications

    What We’re Looking For:

    • A Bachelor of Science degree in Chemical Engineering.
    • At least five years of hands-on process engineering experience, ideally with some exposure to Sulfur Recovery Units.
    • Strong, independent decision-making skills to drive projects with minimal oversight.
    • Technical skills such as P&ID design, equipment/instrument sizing and selection, review of procedures and operating manuals.
    • A knack for balancing multiple projects and sites while maintaining safety and productivity standards.
    • A motivated, safety-conscious individual who inspires others through professionalism and effective communication.

    What we can offer you:

    • Independence: You will have the freedom to make impactful decisions and optimize processes with minimal supervision.
    • Continuous Learning: You will participate in seminars and gain exposure to various subjects, processes and cutting-edge technology.
    • Diverse Experiences: With both office and fieldwork, you'll collaborate with cross-functional teams, travel to multiple sites (domestic and minimal international), and tackle unique challenges.
    • Flexibility: Tessenderlo Kerley values professional growth and allows engineers to explore their interests related to company projects and assignments.
    • Safety First: You will join a company with an outstanding safety record and where your well-being is a top priority.

    Physical Requirements:

    • Ability to lift 50 pounds, climb stairs and use a variety of safety equipment, including respirators and SCBAs.

    If you’re a problem solver, project executor, and passionate about pushing the boundaries of process engineering, this is the role for you!

    Join our team and take your career to the next level by applying your skills to real-world challenges in a dynamic and rewarding environment.

    See more jobs at Tessenderlo Group

    Apply for this job

    15d

    (Senior) Data Engineer - France (F/M/D)

    Shippeo - Paris, France, Remote
    ML, airflow, sql, RabbitMQ, docker, kubernetes, python

    Shippeo is hiring a Remote (Senior) Data Engineer - France (F/M/D)

    Job Description

    The Data Intelligence Tribe is responsible for leveraging Shippeo’s data from our large shipper and carrier base to build data products that help our users (shippers and carriers alike) and ML models that provide predictive insights. This tribe’s typical responsibilities are to ensure that users:

    • get accurately alerted in advance of any potential delays on their multimodal flows or anomalies so that they can proactively anticipate any resulting disruptions

    • extract the data they need, get direct access to it or analyze it directly on the platform to gain actionable insights that can help them increase their operational performance and the quality and compliance of their tracking

    • receive best-in-class data quality through advanced cleansing & enhancement rules

    As a Data Engineer at Shippeo, your objective is to ensure that data is available and exploitable by our Data Scientists and Analysts on our different data platforms. You will contribute to the construction and maintenance of Shippeo’s modern data stack, which is composed of different technology blocks (a minimal consumer sketch follows this list):

    • Data Acquisition (Kafka, KafkaConnect, RabbitMQ),

    • Batch data transformation (Airflow, DBT),

    • Cloud Data Warehousing (Snowflake, BigQuery),

    • Stream/event data processing (Python, docker, Kubernetes) and all the underlying infrastructure that support these use cases.
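
    As a hedged sketch of the acquisition block above, here is a minimal Kafka consumer; the topic, broker and field names are invented, since Shippeo's real configuration is internal.

        import json

        from kafka import KafkaConsumer

        # Hypothetical topic and broker names.
        consumer = KafkaConsumer(
            "tracking-events",
            bootstrap_servers=["localhost:9092"],
            group_id="data-eng-demo",
            value_deserializer=lambda v: json.loads(v.decode("utf-8")),
            auto_offset_reset="earliest",
        )

        for message in consumer:
            event = message.value
            # validate/enrich here, then hand off to the batch or stream layer
            print(event.get("shipment_id"), event.get("status"))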

     

    Qualifications

    Required:

    • You have a degree (MSc or equivalent) in Computer Science.

    • 3+ years of experience as a Data Engineer.

    • Experience building, maintaining, testing and optimizing data pipelines and architectures

    • Programming skills in Python 

    • Advanced working knowledge of SQL, experience working with relational databases and familiarity with a variety of databases.

    • Working knowledge of message queuing and stream processing.

    • Advanced knowledge of Docker and Kubernetes.

    • Advanced knowledge of a cloud platform (preferably GCP).

    • Advanced knowledge of a cloud based data warehouse solution (preferably Snowflake).

    • Experience with Infrastructure as code (Terraform/Terragrunt)

    • Experience building and evolving CI/CD pipelines (Github Actions).

    Desired: 

    • Experience with Kafka and KafkaConnect (Debezium).

    • Monitoring and alerting on Grafana / Prometheus.

    • Experience working on Apache Nifi.

    • Experience working with workflow management systems such as Airflow.

    See more jobs at Shippeo

    Apply for this job

    15d

    Data Engineer

    Phocas Software - Christchurch, Canterbury, New Zealand, Remote Hybrid
    sql, postgresql, python

    Phocas Software is hiring a Remote Data Engineer

    We're a business planning and analytics company on a mission to make people feel good about data. Since 2001, we’ve helped thousands of companies turn complex business data into performance boosting results. Despite our now global status of 300 world-class humans, we’ve held on to our start-up roots. The result is a workplace that’s fast, exciting and designed for fun.

    As long as you’re happy, the rest falls into place. Think less stress, higher performance, more energy and all-round nicer human. Your friends and family will be delighted.

    As the Internal Data Specialist, you'll ensure the business can leverage our internal data sources, allowing us to make better decisions, react faster to changes and build confidence in our data and decisions. Your work will be split between support and project deliverables working with the Phocas IT and Finance teams and the wider business.

    What will you be doing?

    • Supporting internal reporting systems and data transformation processes.
    • Implementing new dashboards, reports, data sources and improvements to existing data sources.
    • Creating scalable and robust pipelines to ingest data from multiple structured and unstructured sources (APIs, databases, flat files, etc.) into our data platform (see the sketch after this list).
    • Generating answers to the business’ questions by working with our internal data assets.
    • Improving business understanding of our data including where it comes from, how it fits together, and the meaning of the data.
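
    As a hedged sketch of the ingestion bullet above, here is a minimal flat-file load with pandas and SQLAlchemy; the connection string, file path and table name are invented.

        import pandas as pd
        from sqlalchemy import create_engine

        # Hypothetical connection string and table name.
        engine = create_engine("postgresql://user:pass@localhost:5432/analytics")

        def ingest_flat_file(path: str, table: str) -> int:
            """Load one CSV into the data platform, returning the row count."""
            df = pd.read_csv(path)
            df.columns = [c.strip().lower() for c in df.columns]  # normalise headers
            df.to_sql(table, engine, if_exists="append", index=False)
            return len(df)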

    What are we looking for?

    • A degree in data science/computer science or similar, and solid (5+ years) experience in similar roles, working with data analytics products (Phocas, Power BI, etc.). 
    • Strong database experience (SQL Server, PostgreSQL) and experience with scripting languages (Python).
    • A general understanding of finance basics: terms, systems, processes, and best practices. 
    • Strong experience designing, developing, and supporting complex data import and transformation processes.
    • Experience creating technical and non-technical documentation and user guides, and a natural tendency to produce strong documentation (both comments within the code, and externally).
    • Proven critical thinking skills; able to proactively problem solve and develop out of the box solutions. 
    • A growth mindset: a willingness to embrace new challenges and opportunities to grow.
    • Someone who can develop strong relationships and work collaboratively and supportively with a diverse global team
    • Bonus points for experience building financial reporting solutions, working with third-party APIs to extract data in an automated manner, and/or experience working in internal customer facing support roles.

    Why work at Phocas? 

    • People – when we ask what people like about working here, 'the people’ is the single most common answer 
    • Social/fun stuff – opportunities to get together, sometimes (optional) silly games, & food. We all really like food. 
    • Our office – spacious, conveniently located in sunny Sydenham, plenty of parking for four-, two- or even single wheeled vehicles.  
    • Southern Cross, Life, TPD and Income Protection Insurance 
    • Extra paid parental leave 
    • Flexible/hybrid working policy  

    Phocas is an Accredited Employer and typically we are strong supporters of international talent, but due to current visa settings and processing times, we can only consider applicants with current NZ working rights.

    We are a 2024 Circle Back Initiative Employer – we commit to respond to every applicant.

    To all recruitment agencies: Phocas does not accept agency resumes. Please do not forward resumes to our jobs alias, Phocas employees or any other company location. Phocas will not be responsible for any fees related to unsolicited resumes.

    Phocas is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans status or any other characteristic protected by law.

    #LI-NG1 #LI-HYBRID

    See more jobs at Phocas Software

    Apply for this job

    18d

    Senior Data Engineer

    Bloomreach - Remote CEE, Czechia, Slovakia
    redis, remote-first, c++, kubernetes, python

    Bloomreach is hiring a Remote Senior Data Engineer

    Bloomreach is the world’s #1 Commerce Experience Cloud, empowering brands to deliver customer journeys so personalized, they feel like magic. It offers a suite of products that drive true personalization and digital commerce growth, including:

    • Discovery, offering AI-driven search and merchandising
    • Content, offering a headless CMS
    • Engagement, offering a leading CDP and marketing automation solutions

    Together, these solutions combine the power of unified customer and product data with the speed and scale of AI optimization, enabling revenue-driving digital commerce experiences that convert on any channel and every journey. Bloomreach serves over 850 global brands including Albertsons, Bosch, Puma, FC Bayern München, and Marks & Spencer. Bloomreach recently raised $175 million in a Series F funding round, bringing its total valuation to $2.2 billion. The investment was led by Goldman Sachs Asset Management with participation from Bain Capital Ventures and Sixth Street Growth. For more information, visit Bloomreach.com.

     

    We want you to join us as a full-time Senior Data Engineer in our Data Pipeline team. We work remote-first, but we are more than happy to meet you in our nice office in Bratislava or Brno. And if you are interested in who your engineering manager will be, check out Vaclav's LinkedIn.

    Intrigued? Read on…

    Your responsibilities

    • You will develop and maintain a Data LakeHouse on top of the GCP platform using Apache Iceberg, BigQuery, BigLake tables, Dataplex and Dataproc, in the form of Apache Spark/Flink with open file formats like Avro and Parquet
    • You will help to maintain a streaming mechanism on how data from Apache Kafka gets into the Data LakeHouse
    • You will optimise the Data LakeHouse for near-real-time and non-real-time analytical use-cases primarily for customer activation and scenarios/campaign evaluation 
    • You should help with areas like data discovery and managed access to data through the data governance layer and data catalog using DataPlex so our engineering teams can leverage from this unified Data LakeHouse
    • You feel responsible for data modeling and schema evolution
    • You should help us adopt the concepts of Data Fabric and Data Mesh to run data as a product and unlock the potential that data can unleash for our clients
    • You should bring expertise into the team from similar previous projects to influence how we adopt and evolve the concepts mentioned above, as well as topics like zero-copy or reverse ETL, to ease integration with clients’ platforms
    • You will also help to maintain the existing data exports to Google’s BigQuery using Google’s Dataflow and Apache Beam (see the sketch after this list)
    • You will help us run and support our services in production handling high-volume traffic using Google Cloud Platform and Kubernetes.
    • You will review the code of your peers and they'll review yours. We have high code quality standards and the four-eyes principle is a must!
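
    Bloomreach's real export jobs are internal; as a hedged sketch of the Dataflow/Beam bullet above, here is a minimal Apache Beam pipeline writing to BigQuery, with invented bucket and table names.

        import json

        import apache_beam as beam
        from apache_beam.options.pipeline_options import PipelineOptions

        # Hypothetical bucket and table names.
        with beam.Pipeline(options=PipelineOptions()) as p:
            (
                p
                | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
                | "Parse" >> beam.Map(json.loads)
                | "Write" >> beam.io.WriteToBigQuery(
                    "example-project:analytics.events",
                    schema="event_id:STRING,event_type:STRING,ts:TIMESTAMP",
                    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                )
            )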

    Your qualifications

    • You have production experience with building and operating a DataLake, Data Warehouse or Data LakeHouses
    • You have a taste for big data streaming, storage and processing using open source technologies
    • You can demonstrate your understanding of what it means to treat data as a product
    • You know what Data Meshes and Data Fabrics are, and what is critical for them to bring value
    • You are able to learn and adapt. It'll be handy while exploring new tech, navigating our not-so-small code base, or when iterating on our team processes.
    • You know data structures, and you know Python and (optionally) Go.

    Our tech stack

    • Google Cloud Platform, DataFlow, Apache Beam, BigQuery, BigLake Table
    • Open formats IceBerg, Avro, Parquet
    • DataProc, Spark, Flink, Presto
    • Python, GO
    • Apache Kafka, Kubernetes, GitLab
    • BigTable, Mongo, Redis
    • … and much more ????

    Compensations

    • Salary range starting from 4300 EUR gross per month, going up depending on your experience and skills
    • There's a bonus based on company performance and your salary.
    • You will be entitled to restricted stock options that will truly make you a part of Bloomreach.
    • You can spend 1500 USD per year on the education of your choice (books, conferences, courses, ...).
    • You can count on free access to Udemy courses.
    • We have 4 company-wide disconnect days throughout the year during which you will be encouraged not to work and spend a day with your friends and family "disconnected".
    • You will have an extra 5 days of paid vacation. Extra days off for extra work-life balance.
    • Food allowance!
    • Sweet referral bonus up to 3000 USD based on the position.

    Your success story.

    • During the first 30 days, you will get to know the team, the company, and the most important processes. You’ll work on your first tasks. We will help you to get familiar with our codebase and our product.
    • During the first 90 days, you will participate in your first, more complex projects. You will help the team to find solutions to various problems, break the solution down into smaller tasks and participate in implementation. You will learn how we identify problems, how we prioritize our efforts, and how we deliver value to our customers.
    • During the first 180 days, you’ll become an integral part of the team. You will achieve the first goals we will set together to help you grow and explore new and interesting things. You will help us to deliver multi-milestone projects bringing great value to our customers. You will help us mitigate your first incidents and eventually even join the on-call rotation. You will get a sense of where the team is heading and you’ll help us to shape our future.
    • Finally, you’ll find out that our values are truly lived by us. We are dreamers and builders. Join us!

     

    More things you'll like about Bloomreach:

    Culture:

    • A great deal of freedom and trust. At Bloomreach we don’t clock in and out, and we have neither corporate rules nor long approval processes. This freedom goes hand in hand with responsibility. We are interested in results from day one. 

    • We have defined our 5 values and the 10 underlying key behaviors that we strongly believe in. We can only succeed if everyone lives these behaviors day to day. We've embedded them in our processes like recruitment, onboarding, feedback, personal development, performance review and internal communication. 

    • We believe in flexible working hours to accommodate your working style.

    • We work remote-first with several Bloomreach Hubs available across three continents.

    • We organize company events to experience the global spirit of the company and get excited about what's ahead.

    • We encourage and support our employees to engage in volunteering activities - every Bloomreacher can take 5 paid days off to volunteer*.
    • The Bloomreach Glassdoor page elaborates on our stellar 4.6/5 rating. The Bloomreach Comparably page culture score is even higher, at 4.9/5.

    Personal Development:

    • We have a People Development Program -- participating in personal development workshops on various topics run by experts from inside the company. We are continuously developing & updating competency maps for select functions.

    • Our resident communication coach Ivo Večeřa is available to help navigate work-related communications & decision-making challenges.*
    • Our managers are strongly encouraged to participate in the Leader Development Program to develop in the areas we consider essential for any leader. The program includes regular comprehensive feedback, consultations with a coach and follow-up check-ins.

    • Bloomreachers utilize the $1,500 professional education budget on an annual basis to purchase education products (books, courses, certifications, etc.)*

    Well-being:

    • The Employee Assistance Program -- with counselors -- is available for non-work-related challenges.*

    • Subscription to Calm - sleep and meditation app.*

    • We organize ‘DisConnect’ days where Bloomreachers globally enjoy one additional day off each quarter, allowing us to unwind together and focus on activities away from the screen with our loved ones.

    • We facilitate sports, yoga, and meditation opportunities for each other.

    • Extended parental leave up to 26 calendar weeks for Primary Caregivers.*

    Compensation:

    • Restricted Stock Units or Stock Options are granted depending on a team member’s role, seniority, and location.*

    • Everyone gets to participate in the company's success through the company performance bonus.*

    • We offer an employee referral bonus of up to $3,000 paid out immediately after the new hire starts.

    • We reward & celebrate work anniversaries -- Bloomversaries!*

    (*Subject to employment type. Interns are exempt from marked benefits, usually for the first 6 months.)

    Excited? Join us and transform the future of commerce experiences!

    If this position doesn't suit you, but you know someone who might be a great fit, share it - we will be very grateful!


    Any unsolicited resumes/candidate profiles submitted through our website or to personal email accounts of employees of Bloomreach are considered property of Bloomreach and are not subject to payment of agency fees.

     #LI-Remote

    See more jobs at Bloomreach

    Apply for this job

    19d

    Data Engineer II - (Remote - US)

    Mediavine - Atlanta, Georgia, United States, Remote
    sql, Design, python, AWS

    Mediavine is hiring a Remote Data Engineer II - (Remote - US)

    Mediavine is seeking an experienced Data Engineer to join our engineering team. We are looking for someone who enjoys solving interesting problems and wants to work with a small team of talented engineers on a product used by thousands of publishers. Applicants must be based in the United States.

    About Mediavine

    Mediavine is a fast-growing advertising management company representing over 10,000 websites in the food, lifestyle, DIY, and entertainment space. Founded by content creators, for content creators, Mediavine is a Top 20 Comscore property, exclusively reaching over 125 million monthly unique visitors. With best-in-class technology and a commitment to traffic quality and brand safety, we ensure optimal performance for our creators.

    Mission & Culture

    We are striving to build an inclusive and diverse team of highly talented individuals that reflect the industries we serve and the world we live in. The unique experiences and perspectives of our team members is encouraged and valued. If you are talented, driven, enjoy the pace of a start-up like environment, let’s talk!

    Position Title & Overview:

    The Data & Analytics team consists of data analysts, data engineers and analytics engineers working to build the most effective platform and tools to help uncover opportunities and make decisions with data here at Mediavine. We partner with Product, Support, Ad Operations and other teams within the Engineering department to understand behavior, develop accurate predictors and build solutions that provide the best internal and external experience possible.

    A Data Engineer at Mediavine will help build and maintain our data infrastructure: building scalable data pipelines, managing transformation processes, and ensuring data quality and security at every step along the way. This will include writing and maintaining code in Python and SQL, developing on AWS, and selecting and using third-party tools like Rundeck, Metabase, and others to round out the environment. You will be involved in decisions around tool selection and coding standards.

     Our current data engineering toolkit consists of custom Python data pipelines, AWS infrastructure including Kinesis pipelines, Rundeck scheduling, dbt for transformation and Snowflake as our data warehouse platform. We are open to new tools and expect this position to be a part of deciding the direction we take. 
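
    To ground the Kinesis mention above, here is a hedged, minimal producer sketch with boto3; the stream and key names are invented, and Mediavine's actual pipelines are internal.

        import json

        import boto3

        kinesis = boto3.client("kinesis")

        def emit(event: dict, stream: str = "example-events") -> None:
            """Push one event onto a Kinesis stream (hypothetical stream name)."""
            kinesis.put_record(
                StreamName=stream,
                Data=json.dumps(event).encode("utf-8"),
                PartitionKey=str(event.get("site_id", "unknown")),
            )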

    Essential Responsibilities:

    • Create data pipelines that make data available for analytic and application use cases
    • Develop self-healing, resilient processes that do not require constant care and feeding to run smoothly
    • Create meaningful data quality notifications with clear actions for interested parties including other internal teams and other members of the data and analytics team
    • Lead projects from a technical standpoint, creating project Technical Design Documents
    • Support data analysts’ and analytics engineers’ ability to meet the needs of the organization
    • Participate in code reviews, understanding coding standards, ensuring test coverage and being aware of best practices
    • Build or implement tooling around data quality, governance and lineage, in the dbt framework and Snowflake but external to that as needed
    • Provide next level support when data issues are discovered and communicated by the data analysts
    • Work with data analysts and analytics engineers to standardize transformation logic in the dbt layer for consistency and ease of exploration by end users
    • Enable analytics engineers and data analysts by providing data modeling guidance, query optimization and aggregation advice

    Location: 

    • Applicants must be based in the United States

    You Have: 

    • 3+ years of experience in a data engineering role
    • Strong Python skills (Understands tradeoffs, optimization, etc)
    • Strong SQL skills (CTEs, window functions, optimization)
    • Experience working in cloud environments (AWS preferred, GCS, Azure)
    • An understanding of how to best structure data to enable internal and external facing analytics
    • Familiarity with calling APIs to retrieve data (authentication flows, filters, limits, pagination); see the sketch after this list
    • Experience working with DevOps to deploy, scale and monitor data infrastructure
    • Scheduler experience either traditional or DAG based
    • Comfortable working with multi-TB cloud data warehouses (Snowflake preferred, Redshift, Big Query)
    • Experience with other DBMS systems (Postgres in particular)
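
    As a hedged sketch of the API bullet above, here is cursor-based pagination with an auth header; the parameter and field names are invented, since real APIs differ.

        import requests

        def fetch_all(url: str, token: str, limit: int = 100):
            """Yield every record from a hypothetical cursor-paginated API."""
            headers = {"Authorization": f"Bearer {token}"}  # common auth flow
            cursor = None
            while True:
                params = {"limit": limit}
                if cursor:
                    params["cursor"] = cursor
                resp = requests.get(url, headers=headers, params=params, timeout=30)
                resp.raise_for_status()
                body = resp.json()
                yield from body["data"]           # hypothetical payload field
                cursor = body.get("next_cursor")  # hypothetical cursor field
                if not cursor:
                    break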

    Nice to haves:

    • Experience with web analysis such as creating data structure that support product funnels, user behavior, and decision path analysis 
    • Understanding of Snowflake external stages, file formats and snowpipe
    • Experience with orchestration tools particularly across different technologies and stacks
    • Experience with dbt
    • Knowledge of Ad Tech, Google Ad Manager and all of its fun quirks (so fun)
    • The ability to make your teammates laugh (it wouldn’t hurt if you were fun to work with is what I’m saying)
    • Familiarity with event tracking systems (NewRelic, Snowplow, etc)
    • Experience with one or more major BI tools (Domo, Looker, PowerBI, etc.)

    Benefits:

    • 100% remote 
    • Comprehensive benefits including Health, Dental, Vision and 401k match
    • Generous paid time off 
    • Wellness and Home Office Perks 
    • Up to 12 weeks of paid Parental Leave 
    • Inclusive Family Forming Benefits 
    • Professional development opportunities 
    • Travel opportunities for teams, our annual All Hands retreat as well as industry events

    Mediavine provides equal employment opportunities to applicants and employees. All aspects of employment will be based on merit, competence, performance, and business needs. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.

    We strongly encourage minorities and individuals from underrepresented groups in technology to apply for this position.

    At Mediavine, base salary is one part of our competitive total compensation and benefits package and is determined using a salary range.  Individual compensation varies based on job-related factors, including business needs, experience, level of responsibility and qualifications. The base salary range for this role at the time of posting is $115,000 - $130,000 USD/yr.

    See more jobs at Mediavine

    Apply for this job

    19d

    Data Engineer

    golang, Master’s Degree, tableau, terraform, scala, sql, Design, azure, api, java, c++, kubernetes, python, AWS

    Cloudflare is hiring a Remote Data Engineer

    About Us

    At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

    We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

     

    Locations - Austin highly preferred; must be willing to relocate. 


    About the team  

    The Business Intelligence team at Cloudflare is responsible for building a centralized cloud data lake and an analytics platform that enables our internal Business Partners and Machine Learning teams with actionable insights and also provides a 360 view of our business. Our goal is to democratize data, support Cloudflare’s critical business needs, provide reporting and analytics via self-service tools to fuel existing and new business critical initiatives.

    About the role

    We are looking for an experienced Data Engineer to join our Business Intelligence Data Science team. The role involves designing, building, and maintaining data pipelines and infrastructure to support data science and machine learning initiatives. You will work closely with data scientists and other stakeholders to ensure data accessibility and quality, enabling the creation and operation of machine learning models and analytics solutions. You will help support our data science team to build solutions using Large Language Models, AI services, and machine learning solutions following industry AI standards and best practices. 

    What you will do

    • Collaborate with data scientists and business stakeholders to design datasets and engineer features for advanced analytical models.
    • Develop, test, and manage robust and scalable data pipelines and systems.
    • Collect, explore, validate, and prepare data for comprehensive analysis.
    • Monitor, optimize, and support development, testing, and production environments to maintain efficiency.
    • Ensure adherence to data governance and compliance standards and processes.
    • Design application components and evolve architecture, including APIs, services, data access, and integration.
    • Implement automation tools and frameworks, including CI/CD pipelines, to streamline processes.
    • Develop tools to automate workload monitoring and take proactive measures to scale the platform or resolve issues.
    • Mentor and guide junior data engineers, fostering a culture of continuous learning and development.

     

    Desired skills, knowledge, and experience 

    • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology or related field
    • 3+ years of experience designing data solutions, modeling data, and developing ETL/ELT pipelines at large scale.
    • 3+ years of experience in a programming language (e.g., Python, Java, Scala, Golang).
    • Experience in developing large-scale data solutions using cloud platforms (e.g., AWS, Azure, Google Cloud) and big data technologies (e.g., Apache Spark).
    • Experience in data cataloging, classification, data quality, metadata management, and data lifecycle.
    • Experience with SQL and data visualization/analytics tools (e.g., Tableau, Looker Studio).
    • Experience with CI/CD pipelines and source control.
    • Experience with Infrastructure as Code tools like Terraform.
    • Proficiency in API design and development of RESTful web services or GraphQL (a minimal sketch follows this list).
    • Working knowledge of container and orchestration technologies like Docker and Kubernetes.
    • Experience with machine learning techniques and tools is a plus.
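
    Since the list calls out RESTful API design, here is a minimal, hedged FastAPI sketch of a read-only metadata endpoint; the route, model, and catalog below are invented for illustration, not Cloudflare's API.

    # Minimal REST endpoint with FastAPI (names are illustrative assumptions).
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()

    class DatasetInfo(BaseModel):
        name: str
        row_count: int

    # Hypothetical in-memory catalog standing in for a real metadata store.
    CATALOG = {"events": DatasetInfo(name="events", row_count=1_000_000)}

    @app.get("/datasets/{name}", response_model=DatasetInfo)
    def get_dataset(name: str) -> DatasetInfo:
        if name not in CATALOG:
            raise HTTPException(status_code=404, detail="dataset not found")
        return CATALOG[name]

    # Run with: uvicorn main:app --reload  (assuming this file is main.py)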

    What Makes Cloudflare Special?

    We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

    Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, using the same technology relied on by Cloudflare’s enterprise customers, at no cost.

    Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

    1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

    Sound like something you’d like to be a part of? We’d love to hear from you!

    This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

    Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

    Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St., San Francisco, CA 94107.

    See more jobs at Cloudflare

    Apply for this job

    21d

    Data and Analytics Engineer

    airflow, sql, Design, python

    Cloudflare is hiring a Remote Data and Analytics Engineer

    About Us

    At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

    We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

    Available Locations: Lisbon or Remote Portugal

    About the team

    You will be part of the Network Strategy team within Cloudflare’s Infrastructure Engineering department. The Network Strategy team focuses on building both external and internal relationships that allow Cloudflare to scale and reach user populations around the world. Our group takes a long-term, technical approach to forging mutually beneficial and sustainable relationships with all of our network partners.

    About the role

    We are looking for an experienced Data and Analytics Engineer to join our team to scale our data insights initiatives. You will work with a wide array of data sources about network traffic, performance, and cost. You’ll be responsible for building data pipelines, doing ad-hoc analytics based on the data, and automating our analysis. Important projects include understanding the resource consumption and cost of Cloudflare’s broad product portfolio.

    A candidate will be successful in this role if they are flexible and able to match the right solution to the right problem; Cloudflare is a fast-paced environment and requirements change frequently.

    What you'll do

    • Design and implement data pipelines that take unprocessed data and make it usable for advanced analytics
    • Work closely with other product and engineering teams to ensure our products and services collect the right data for our analytics
    • Work closely with a cross functional team of data scientists and analysts and internal stakeholders on strategic initiatives 
    • Build tooling, automation, and visualizations around our analytics for consumption by other Cloudflare teams

    Examples of desirable skills, knowledge and experience

    • Excellent Python and SQL (one of the interviews will be a code review)
    • B.S. or M.S. in Computer Science, Statistics, Mathematics, or another quantitative field, or equivalent experience
    • Minimum 3 years of industry experience in software engineering, data engineering, data science, or a related field, with a track record of extracting, transforming, and loading large datasets
    • Knowledge of data management fundamentals and data storage/computing principles
    • Excellent communication and problem-solving skills
    • Ability to collaborate with cross-functional teams and work through ambiguous business requirements

    Bonus Points

    • Familiarity with Airflow (a minimal DAG sketch follows this list)
    • Familiarity with Google Cloud Platform or other analytics databases
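
    For a concrete flavor of the Airflow familiarity mentioned above, here is a minimal DAG sketch; the DAG id, schedule, and task bodies are assumptions for illustration, not Cloudflare's actual pipelines.

    # A minimal two-task Airflow DAG (illustrative; Airflow 2.x API).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_traffic_snapshot():
        """Placeholder: pull a day of network-traffic records from a source system."""

    def load_to_warehouse():
        """Placeholder: write transformed records to the analytics database."""

    with DAG(
        dag_id="network_traffic_daily",   # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                # Airflow 2.4+; older: schedule_interval
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract",
                                 python_callable=extract_traffic_snapshot)
        load = PythonOperator(task_id="load",
                              python_callable=load_to_warehouse)
        extract >> load                   # run extract before load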

    What Makes Cloudflare Special?

    We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

    Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, using the same technology relied on by Cloudflare’s enterprise customers, at no cost.

    Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

    1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

    Sound like something you’d like to be a part of? We’d love to hear from you!

    This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

    Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

    Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St., San Francisco, CA 94107.

    See more jobs at Cloudflare

    Apply for this job

    22d

    (Senior) Data Engineer (F/M/D)

    Shippeo - Paris, France, Remote
    ML, airflow, sql, RabbitMQ, docker, kubernetes, python

    Shippeo is hiring a Remote (Senior) Data Engineer (F/M/D)

    Job Description

    The Data Intelligence Tribe is responsible for leveraging Shippeo’s data from our large shipper and carrier base to build data products and ML models that provide predictive insights to our users (shippers and carriers alike). This tribe typically makes sure that users can:

    • get accurately alerted in advance of any potential delays on their multimodal flows or anomalies so that they can proactively anticipate any resulting disruptions

    • extract the data they need, get direct access to it or analyze it directly on the platform to gain actionable insights that can help them increase their operational performance and the quality and compliance of their tracking

    • benefit from best-in-class data quality, delivered through advanced cleansing & enhancement rules

    As a Data Engineer at Shippeo, your objective is to ensure that data is available and exploitable by our Data Scientists and Analysts on our different data platforms. You will contribute to the construction and maintenance of Shippeo’s modern data stack that’s composed of different technology blocks:

    • Data Acquisition (Kafka, KafkaConnect, RabbitMQ),

    • Batch data transformation (Airflow, DBT),

    • Cloud Data Warehousing (Snowflake, BigQuery),

    • Stream/event data processing (Python, Docker, Kubernetes), plus all the underlying infrastructure that supports these use cases (a minimal consumer sketch follows this list).
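
    As a sketch of the stream/event block above, here is a minimal Kafka consumer in Python using kafka-python; the topic, broker address, group id, and field names are placeholders, not Shippeo's actual setup.

    # Minimal Kafka consumer for stream-event processing (illustrative).
    import json

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "shipment-events",                   # hypothetical topic
        bootstrap_servers="localhost:9092",  # assumption: local broker
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
        group_id="eta-enrichment",           # hypothetical consumer group
    )

    for message in consumer:
        event = message.value
        # Placeholder handling: a real pipeline might enrich the event and
        # forward it to the warehouse loader.
        print(event.get("shipment_id"), event.get("status"))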

     

    Qualifications

    Required:

    • You have a degree (MSc or equivalent) in Computer Science.

    • 3+ years of experience as a Data Engineer.

    • Experience building, maintaining, testing and optimizing data pipelines and architectures

    • Programming skills in Python 

    • Advanced working knowledge of SQL, experience working with relational databases and familiarity with a variety of databases.

    • Working knowledge of message queuing and stream processing.

    • Advanced knowledge of Docker and Kubernetes.

    • Advanced knowledge of a cloud platform (preferably GCP).

    • Advanced knowledge of a cloud based data warehouse solution (preferably Snowflake).

    • Experience with Infrastructure as code (Terraform/Terragrunt)

    • Experience building and evolving CI/CD pipelines (Github Actions).

    Desired: 

    • Experience with Kafka and KafkaConnect (Debezium).

    • Monitoring and alerting on Grafana / Prometheus.

    • Experience working with Apache NiFi.

    • Experience working with workflow management systems such as Airflow.

    See more jobs at Shippeo

    Apply for this job

    24d

    Azure Data Engineer

    ProArch - Hyderabad, Telangana, India, Remote
    Design, azure

    ProArch is hiring a Remote Azure Data Engineer

    ProArch is hiring a skilled Azure Data Engineer to join our team. As an Azure Data Engineer, you will be responsible for designing, implementing, and maintaining data processing systems using Azure technologies. Additionally, you will collaborate with cross-functional teams to understand business requirements, identify opportunities for data-driven improvements, and deliver high-quality solutions. If you have a strong background in Azure data tools and technologies, excellent problem-solving skills, and a passion for data engineering, we want to hear from you!

    Responsibilities:

    • Design, develop, and implement data engineering solutions on the Azure platform
    • Create and maintain data pipelines and ETL processes
    • Optimize data storage and retrieval for performance and scalability
    • Collaborate with data scientists and analysts to build data models and enable data-driven insights
    • Ensure data quality and integrity through data validation and cleansing
    • Monitor and troubleshoot data pipelines and resolve any issues
    • Stay up-to-date with the latest Azure data engineering best practices and technologies
    Requirements:

    • Excellent communication skills
    • Strong experience in Python/PySpark (a short transformation sketch follows this list)
    • The ability to understand business concepts and work with customers to process data accurately
    • A solid understanding of Azure Data Lake, Spark for Synapse (or Azure Databricks), Synapse Pipelines (or Azure Data Factory), Mapping Data Flows, SQL Server, and Synapse Serverless/Pools (or SQL Data Warehouse)
    • Experience with source control, version control, and moving data artifacts from Dev to Test to Prod
    • A proactive self-starter who likes to deliver value, solve challenges, and make progress
    • Comfortable working in a team or as an individual contributor
    • Good data modelling skills (e.g., relationships, entities, facts, and dimensions)
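
    As a rough sketch of the Python/PySpark work referenced above, here is a minimal batch rollup; the paths, columns, and app name are assumptions, not an actual customer lake layout.

    # Minimal PySpark batch transformation (illustrative).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

    orders = spark.read.parquet("/lake/raw/orders")   # hypothetical path

    daily = (
        orders
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date")
        .agg(F.count("*").alias("order_count"),
             F.sum("amount").alias("revenue"))
    )

    daily.write.mode("overwrite").parquet("/lake/curated/orders_daily")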

    See more jobs at ProArch

    Apply for this job

    28d

    Senior Data Engineer

    CLEAR - Corporate - New York, New York, United States (Hybrid)
    tableau, airflow, sql, Design, jenkins, python, AWS

    CLEAR - Corporate is hiring a Remote Senior Data Engineer

    Today, CLEAR is well-known as a leader in digital and biometric identification, reducing friction for our members wherever an ID check is needed. We’re looking for an experienced Senior Data Engineer to help us build the next generation of products which will go beyond just ID and enable our members to leverage the power of a networked digital identity. As a Senior Data Engineer at CLEAR, you will participate in the design, implementation, testing, and deployment of applications to build and enhance our platform: one that interconnects dozens of attributes and qualifications while keeping member privacy and security at the core.


    A brief highlight of our tech stack:

    • SQL / Python / Looker / Snowflake / Airflow / Databricks / Spark / dbt
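
    For a flavor of this stack, here is a minimal, hedged sketch of querying Snowflake from Python with the official connector; the account, credentials, and table are placeholders, not CLEAR's environment.

    # Minimal Snowflake query from Python (illustrative placeholders throughout).
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_org-my_account",   # placeholder account identifier
        user="DATA_ENGINEER",          # placeholder user
        password="...",                # in practice, use SSO or key-pair auth
        warehouse="ANALYTICS_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT event_date, COUNT(*) FROM member_events GROUP BY event_date"
        )
        for event_date, n in cur.fetchall():
            print(event_date, n)
    finally:
        conn.close()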

    What you'll do:

    • Build a scalable data system in which Analysts and Engineers can self-service changes in an automated, tested, secure, and high-quality manner 
    • Build processes supporting data transformation, data structures, metadata, dependency and workload management
    • Develop and maintain data pipelines to collect, clean, and transform data, owning the end-to-end data product from ingestion to visualization
    • Develop and implement data analytics models
    • Partner with product and other stakeholders to uncover requirements, to innovate, and to solve complex problems
    • Have a strong sense of ownership, taking responsibility for architectural decision-making and striving for continuous improvement in technology and processes at CLEAR

     What you're great at:

    • 6+ years of data engineering experience
    • Experience with cloud-based application development, with fluency in at least a few of:
      • Cloud services providers like AWS
      • Data pipeline orchestration tools like Airflow, Dagster, Luigi, etc
      • Big data tools like Spark, Kafka, Snowflake, Databricks, etc
      • Collaboration, integration, and deployment tools like Github, Argo, and Jenkins 
      • Data visualization tools like Looker, Tableau, etc
    • Articulating technical concepts to a mixed audience of technical and non-technical stakeholders
    • Collaborating and mentoring less experienced members of the team
    • Comfort with ambiguity 
    • Curiosity about technology, belief in constant learning, and the autonomy to figure out what's important

    How You'll be Rewarded:

    At CLEAR we help YOU move forward - because when you’re at your best, we’re at our best. You’ll work with talented team members who are motivated by our mission of making experiences safer and easier. Our hybrid work environment provides flexibility. In our offices, you’ll enjoy benefits like meals and snacks. We invest in your well-being and learning & development with our stipend and reimbursement programs. 

    We offer holistic total rewards, including comprehensive healthcare plans, family building benefits (fertility and adoption/surrogacy support), flexible time off, free OneMedical memberships for you and your dependents, and a 401(k) retirement plan with employer match. The base salary range for this role is $175,000 - $215,000, depending on levels of skills and experience.

    The base salary range represents the low and high end of CLEAR’s salary range for this position. Salaries will vary depending on various factors which include, but are not limited to, location, education, skills, experience, and performance. The range listed is just one component of CLEAR’s total compensation package for employees; other rewards may include annual bonuses, commission, and Restricted Stock Units.

    About CLEAR

    Have you ever had that green-light feeling? When you hit every green light and the day just feels like magic. CLEAR's mission is to create frictionless experiences where every day has that feeling. With more than 25 million passionate members and hundreds of partners around the world, CLEAR’s identity platform is transforming the way people live, work, and travel. Whether it’s at the airport, stadium, or right on your phone, CLEAR connects you to the things that make you, you - unlocking easier, more secure, and more seamless experiences - making them all feel like magic.

    CLEAR provides reasonable accommodation to qualified individuals with disabilities or protected needs. Please let us know if you require a reasonable accommodation to apply for a job or perform your job. Examples of reasonable accommodation include, but are not limited to, time off, extra breaks, making a change to the application process or work procedures, policy exceptions, providing documents in an alternative format, live captioning or using a sign language interpreter, or using specialized equipment.

    See more jobs at CLEAR - Corporate

    Apply for this job

    +30d

    Data Engineer with Databricks (Remote)

    Loginsoft Consulting LLC - Richardson, TX - Remote
    10 years of experience, sql, Design, azure, python, AWS

    Loginsoft Consulting LLC is hiring a Remote Data Engineer with Databricks (Remote)

    NOTE: THIS POSITION IS TO JOIN AS W2 ONLY.

    Data Engineer with Databricks

    Location: Remote

    Duration: 6+ Months

    Daily Responsibilities:

    • Design, develop, test, deploy, maintain, and improve software applications and services.
    • Implement data pipelines and ETL processes using Databricks and Snowflake (a minimal sketch follows this list).
    • Collaborate with other engineers to understand requirements and translate them into technical solutions.
    • Optimize and fine-tune performance of data pipelines and database queries.
    • Ensure code quality through code reviews, unit testing, and continuous integration.
    • Contribute to architecture and technical design discussions.
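
    As a sketch of the Databricks pipeline work above, here is a minimal Delta-table batch step; it assumes it runs inside a Databricks notebook or job (where `spark` is predefined), and the paths and table names are placeholders.

    # Minimal Databricks-style batch step writing a Delta table (illustrative).
    from pyspark.sql import functions as F

    # `spark` is the SparkSession Databricks provides in notebooks and jobs.
    raw = spark.read.json("/mnt/landing/events/")     # hypothetical mount

    cleaned = (
        raw
        .dropDuplicates(["event_id"])                 # assumed dedupe key
        .withColumn("ingested_at", F.current_timestamp())
    )

    cleaned.write.format("delta").mode("append").saveAsTable("analytics.events_clean")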

    Technology requirements:

    • SQL
    • GCP
    • AWS

    Degree or certifications required:

    • No degrees required

    Years experience:

    • 10 years of experience as a Software Engineer or related role.

    Required background/ Skillsets:

    • Prior working experience in Databricks
    • Snowflake
    • Python
    • Data science background

    • Proficiency in Python for software development and scripting.
    • Hands-on experience with Databricks and Snowflake for data engineering and analytics.
    • Strong understanding of database design, SQL, and data modeling principles.
    • Experience with cloud platforms such as AWS, Azure, or GCP.
    • Familiarity with machine learning concepts and frameworks is a plus.
    • Excellent problem-solving skills and ability to work independently and as part of a team.
    • Strong communication skills and ability to collaborate effectively across teams.

    See more jobs at Loginsoft Consulting LLC

    Apply for this job

    +30d

    Snowflake Data Engineer

    Onebridge - Indianapolis, IN - Remote - Hybrid
    sql, Design, git

    Onebridge is hiring a Remote Snowflake Data Engineer

    Onebridge is a Consulting firm with an HQ in Indianapolis, and clients dispersed throughout the United States and beyond. We have an exciting opportunity for a highly skilled Snowflake Data Engineer to join an innovative and dynamic group of professionals at a company rated among the top “Best Places to Work” in Indianapolis since 2015.

    Snowflake Data Engineer | About You

    As a Snowflake Data Engineer, you are responsible for defining data requirements, developing technical specifications, and architecting scalable and efficient data pipelines. You have a strong background in cloud-based data platforms and services, along with proven leadership skills to manage a team of data engineers. You will optimize ETL architectures and ensure adherence to best practices, security, and coding guidelines. You will also work closely with cross-functional teams, offering strategic insights and reporting project status, risks, and issues.

    Snowflake Data Engineer | Day-to-Day

    • Lead a team of data engineers in the design, development, and implementation of cloud-based data solutions using Snowflake, Fivetran, and Azure services.
    • Collaborate with cross-functional teams to define data requirements, develop technical specifications, and architect scalable, efficient data pipelines.
    • Design and implement data models, ETL processes, and data integration solutions to support business objectives and ensure data quality and integrity.
    • Optimize data architecture for performance, scalability, and cost-effectiveness, leveraging cloud-native technologies and best practices.
    • Provide technical leadership and mentorship, guiding team members in the adoption of best practices, tools, and methodologies.

    Snowflake Data Engineer | Skills & Experience

    • 8+ years of experience as a Data Engineer with a focus on cloud-based data platforms and services such as AWS, Azure, or GCP.
    • Extensive hands-on experience designing and implementing data solutions using Snowflake, Fivetran, and Azure cloud environments.
    • Strong proficiency in SQL and Python, with advanced knowledge of data modelling techniques, dimensional modelling, and data warehousing concepts.
    • In-depth understanding of data governance, security, and compliance frameworks, with experience in implementing security controls and encryption in cloud environments.
    • Excellent leadership and communication skills, with the ability to lead cross-functional teams, communicate technical strategies, and achieve goals in a fast-paced environment.

      A Best Place to Work in Indiana since 2015.

      See more jobs at Onebridge

      Apply for this job

      +30d

      Principal Data Engineer

      Brightcove - US - Remote
      airflow, Design, api, java, c++, kubernetes, python, AWS

      Brightcove is hiring a Remote Principal Data Engineer

      Role Overview:

      We are seeking an experienced and highly skilled Principal Data Engineer to join our dynamic team. In this role, you will play a pivotal part in our data modernization effort and will be responsible for designing, developing, and maintaining scalable data infrastructure and pipelines that support our organization's data needs. You will leverage your expertise in Java, Python, Snowflake, GCP, AWS, APIs, batch processing, DBT, Kubernetes, CI/CD tools, monitoring/alerting, and data governance to ensure robust and efficient data solutions. Additionally, you will play a crucial role in coaching and mentoring junior engineers to foster their growth and development.

      Key Responsibilities:

      • Architect and Design Data Systems: Lead the design and implementation of scalable data architectures and pipelines using Snowflake, GCP, AWS, and other technologies. Ensure data systems are efficient, reliable, and meet organizational needs. Develop and optimize data warehouse design and architecture to enhance performance and scalability.
      • Develop and Maintain Data Pipelines: Build and optimize data pipelines and ETL processes using Python, Java, and DBT. Handle batch processing and integrate with APIs as needed to facilitate data flow (a minimal API-integration sketch follows this list).
      • Data Infrastructure Management: Oversee the management and optimization of data infrastructure components, including cloud platforms (GCP, AWS) and container orchestration tools (Kubernetes).
      • CI/CD Integration: Implement and manage continuous integration and continuous deployment (CI/CD) processes for data engineering workflows using relevant tools and technologies.
      • Monitoring and Alerting: Set up and manage monitoring and alerting systems to ensure data pipelines and infrastructure are operating smoothly. Troubleshoot and resolve issues as they arise.
      • Data Governance: Establish and enforce data governance practices to ensure data quality, security, and compliance. Develop policies and procedures for data stewardship, data privacy, and data lifecycle management.
      • API Development: Build and integrate APIs to facilitate data exchange and ensure seamless connectivity between different systems and platforms.
      • Coaching and Collaboration: Provide guidance and mentorship to junior data engineers and team members. Foster a collaborative environment that encourages learning and professional development. Work closely with data scientists, analysts, and other stakeholders to understand their data requirements and deliver solutions that meet their needs.
      • Documentation: Maintain comprehensive documentation for data pipelines, architecture designs, and processes. Ensure documentation is up-to-date and accessible to team members. Keep up with industry trends, emerging technologies, and best practices to ensure the data engineering team remains at the forefront of technology.
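
      To illustrate the API-integration side of the batch work above, here is a minimal, hedged Python sketch using requests; the endpoint, token handling, and field names are invented placeholders, not Brightcove's APIs.

      # Minimal API-to-batch integration step (illustrative placeholders).
      import requests

      def fetch_page(url: str, token: str) -> list[dict]:
          """Fetch one page of records from a hypothetical REST endpoint."""
          resp = requests.get(
              url,
              headers={"Authorization": f"Bearer {token}"},
              timeout=30,
          )
          resp.raise_for_status()
          return resp.json().get("items", [])

      def run_batch() -> None:
          records = fetch_page("https://api.example.com/v1/videos", token="...")
          for rec in records:
              # Placeholder load: a real pipeline might stage these rows into
              # Snowflake or hand them to a DBT-managed model.
              print(rec.get("id"), rec.get("created_at"))

      if __name__ == "__main__":
          run_batch()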

      Experience and Qualifications:

      • 12+ years of experience in data engineering or a related field.
      • Proven track record of designing and implementing large-scale data systems and pipelines.
      • Extensive experience with Snowflake, GCP, AWS, Kafka and Kubernetes.
      • Strong proficiency in Java and Python.
      • Hands-on experience with batch processing and data transformation using Airflow and DBT.
      • Proven experience in building and integrating APIs.
      • Fluency in data warehouse design and optimization techniques.
      • Expertise in data architecture and system design.
      • Proficiency in using CI/CD tools for data workflows.
      • Strong understanding of data governance practices and data quality management.
      • Advanced skills in data warehouse design, performance tuning, and optimization.
      • Strong analytical and problem-solving skills.
      • Excellent communication and interpersonal skills.
      • Ability to coach and mentor team members effectively.

      Preferred Qualifications:

      • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field. Advanced degree is a plus.
      • Certification in relevant technologies (e.g., AWS Certified Data Analytics, Google Professional Data Engineer).
      • Experience with advanced monitoring and alerting tools.
      • Familiarity with data governance frameworks and compliance standards (e.g., GDPR, CCPA).

      About Brightcove 

      Brightcove is a diverse, global team of smart, passionate people who are revolutionizing the way organizations deliver video. We’re hyped up about storytelling, and about helping organizations reach their audiences in bold and innovative ways. When video is done right, it can have a powerful and lasting effect. Hearts open. Minds change. 

      Since 2004, Brightcove has been supporting customers that are some of the largest media companies, enterprises, events, and non-profit organizations in the world. There are over 600 Brightcovers globally, each of us representing our unique talents and we have built a culture that values authenticity, individual empowerment, excellence and collaboration. This culture enables us to harness the incredible power of video and create an environment where you will want to grow, stay and thrive. Bottom line: We take our video seriously, and we take great pride in doing it as #oneteam.

      WORKING AT BRIGHTCOVE 

      We strive to provide our employees with an environment where they can do their best work and be their best selves. This includes a focus on our employees’ work experience, actively creating a culture where inclusion and growth are at the center, and hiring, recognizing, and promoting employees who are committed to living and breathing these same ideals. We value collaboration, creativity, work/life balance, professional growth and creating an empowering space for open communication. Whether you’re in one of our offices around the world or working remotely, you have plenty of opportunities to meet colleagues and celebrate a variety of personal interests with organized groups and clubs including an Employee Action Committee, Women of Brightcove, Pride of Brightcove, Parents of Brightcove … and more to come!

      We recognize that no candidate is perfect and Brightcove would love to have the chance to get to know you. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender, gender identity or expression, or veteran status. Brightcove embraces diversity and seeks candidates who support persons of all identities and backgrounds. We strongly encourage individuals from underrepresented and/or marginalized identities to apply. If you need any accommodations for your interview, please email recruiting@brightcove.com

      The Brightcove Privacy Policy explains the processing and purposes of any personal information.

       

       

      At Brightcove, we believe that providing comprehensive and competitive compensation and benefits packages across the globe are essential to our employees. Base salary is just one component of Brightcove’s total rewards program. We offer a wide range of benefits and perks that may include bonus or commission, Brightcove stock, unlimited paid time off, 401(K) matching, health insurance (medical, dental, and vision), generous employer Health Savings Account (HSA) contributions, tuition reimbursement, 100% paid parental leave and more.

      USA Brightcove Base Salary Range
      $169,200 - $253,800 USD

      See more jobs at Brightcove

      Apply for this job