airflow Remote Jobs

89 Results

3h

Principal Data Engineer

Procore Technologies - Bangalore, India, Remote
scala, nosql, airflow, Design, azure, UX, java, docker, postgresql, kubernetes, jenkins, python, AWS

Procore Technologies is hiring a Remote Principal Data Engineer

Job Description

We’re looking for a Principal Data Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next-generation construction data platform for others to build upon, including Procore developers, analysts, partners, and customers.

As a Principal Data Engineer, you’ll use your expert-level technical skills to craft innovative solutions while influencing and mentoring other senior technical leaders. To be successful in this role, you’re passionate about distributed systems, including caching, streaming, and indexing technologies on the cloud, with a strong bias for action and outcomes. If you’re an inspirational leader comfortable translating vague problems into pragmatic solutions that open up the boundaries of technical possibilities—we’d love to hear from you!

This position reports to the Senior Manager, Reporting and Analytics. It can be based in our Bangalore or Pune office, or you can work remotely from a location in India. We’re looking for someone to join us immediately.

What you’ll do: 

  • Design and build the next-generation data platform for the construction industry
  • Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
  • Contribute to setting standards and development principles across multiple teams and the larger organization
  • Stay connected with other architectural initiatives and craft a data platform architecture that supports and drives our overall platform
  • Provide technical leadership to efforts around building a robust and scalable data pipeline to support billions of events
  • Help identify and propose solutions for technical and organizational gaps in our data pipeline by running proofs of concept and experiments, working with Data Platform Engineers on implementation
  • Work alongside our Product, UX, and IT teams, leveraging your experience and expertise in the data space to influence our product roadmap, developing innovative solutions that add additional capabilities to our tools

What we’re looking for: 

  • Bachelor’s degree in Computer Science, a similar technical field of study, or equivalent practical experience is required; MS or Ph.D. degree in Computer Science or a related field is preferred
  • 10+ years of experience building and operating cloud-based, highly available, and scalable online serving or streaming systems utilizing large, diverse data sets in production
  • Expertise with diverse data technologies like Databricks, PostgreSQL, graph databases, NoSQL databases, Mongo, Cassandra, Elasticsearch, Snowflake, etc.
  • Strength in the majority of commonly used data technologies and languages such as Python, Java or Scala, Kafka, Spark, Airflow, Kubernetes, Docker, Argo, Jenkins, or similar
  • Expertise with all aspects of data systems, including ETL, aggregation strategy, performance optimization, and technology trade-offs
  • Understanding of data access patterns, streaming technology, data validation, data modeling, data performance, and cost optimization
  • Experience defining data engineering/architecture best practices at a department and organizational level and establishing standards for operational excellence and code and data quality at a multi-project level
  • Strong passion for learning, always open to new technologies and ideas
  • AWS and Azure experience is preferred

See more jobs at Procore Technologies

Apply for this job

3h

Staff Data Engineer

Procore Technologies - Bangalore, India, Remote
scala, airflow, sql, Design, UX, java, kubernetes, python

Procore Technologies is hiring a Remote Staff Data Engineer

Job Description

We’re looking for a Staff Data Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next-generation construction data platform for others to build upon, including Procore developers, analysts, partners, and customers.

As a Staff Data Engineer, you’ll partner with other engineers and product managers across Product & Technology to develop data platform capabilities that enable the movement, transformation, and retrieval of data for use in analytics, machine learning, and service integration. To be successful in this role, you’re passionate about distributed systems including storage, streaming, and batch data processing technologies on the cloud, with a strong bias for action and outcomes. If you’re a seasoned data engineer comfortable and excited about building our next-generation data platform and translating problems into pragmatic solutions that open up the boundaries of technical possibilities—we’d love to hear from you!

This is a full-time position reporting to our Senior Manager of Software Engineering. It will be based in the India office, but employees can choose to work remotely. We are looking for someone to join our team immediately.

What you’ll do: 

  • Participate in the design and implementation of our next-generation data platform for the construction industry
  • Define and implement operational and dimensional data models and transformation pipelines to support reporting and analytics
  • Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
  • Understand our current data models and infrastructure, proactively identify areas for improvement, and prescribe architectural recommendations with a focus on performance and accessibility. 
  • Work alongside our Product, UX, and IT teams, leveraging your expertise in the data space to influence our product roadmap, developing innovative solutions that add additional value to our platform
  • Help uplevel teammates by conducting code reviews, providing mentorship, pairing, and training opportunities
  • Stay up to date with the latest data technology trends

What we’re looking for: 

  • Bachelor’s Degree in Computer Science or a related field is preferred, or comparable work experience 
  • 8+ years of experience building and operating cloud-based, highly available, and scalable data platforms and pipelines supporting vast amounts of data for reporting and analytics
  • 2+ years of experience building data warehouses in Snowflake or Redshift
  • Hands-on experience with MPP query engines like Snowflake, Presto, Dremio, and Spark SQL
  • Expertise in relational and dimensional data modeling
  • Understanding of data access patterns, streaming technology, data validation, performance optimization, and cost optimization
  • Strength in commonly used data technologies and languages such as Python, Java or Scala, Kafka, Spark, Flink, Airflow, Kubernetes, or similar
  • Strong passion for learning, always open to new technologies and ideas

See more jobs at Procore Technologies

Apply for this job

11h

Software Engineer - Infrastructure Platforms

Cloudflare - Austin or Remote US
airflow, postgres, sql, Design, ansible, docker, postgresql, mysql, kubernetes, linux, python

Cloudflare is hiring a Remote Software Engineer - Infrastructure Platforms

About Us

At Cloudflare, we have our eyes set on an ambitious goal: to help build a better Internet. Today the company runs one of the world’s largest networks that powers approximately 25 million Internet properties, for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

Available Locations: Remote - US

About the Role

An engineering role at Cloudflare provides an opportunity to address some big challenges, at scale.  We believe that with our talented team, we can solve some of the biggest security, reliability and performance problems facing the Internet. Just how big?  

  • We have in excess of 15 Terabits of network transit capacity
  • We operate 250 Points-of-presence around the world
  • We serve more traffic than Twitter, Amazon, Apple, Instagram, Bing, & Wikipedia combined
  • Anytime we push code, it immediately affects over 200 million internet users
  • Every day, up to 20,000 new customers sign-up for Cloudflare service
  • Every week, the average Internet user touches us more than 500 times

We are looking for talented Software Engineers to build and develop the platform that earns the trust Cloudflare customers place in us. Our Software Engineers come from a variety of technical backgrounds and have built up their knowledge working in different environments. But the common factors across all of our reliability-focused engineers include a passion for automation, scalability, and operational excellence. Our Infrastructure Engineering team focuses on the automation needed to scale our infrastructure.

Our team is well-funded and focused on building an extraordinary company.  This is a superb opportunity to join a high-performing team and scale our high-growth network as Cloudflare’s business grows.  You will build tools to constantly improve our scale and speed of deployment.  You will nurture a passion for an “automate everything” approach that makes systems failure-resistant and ready-to-scale.   

Infrastructure Platforms Software Engineers inside our Resiliency organization focus on building and maintaining the reliable and scalable underlying platforms that act as sources of truth and foundations for automation of Cloudflare’s hardware, network, and datacenter infrastructure. We interface with SRE, Network Engineering, Datacenter Engineering and other Infrastructure and Reliability teams to ensure their ongoing needs are met by the platforms we provide.

Many of our Software Engineers have had the opportunity to work at multiple offices on interim and long-term project assignments. The ideal Software Engineering candidate has a passionate curiosity about how the Internet fundamentally works and has strong knowledge of Linux and hardware. We require strong coding ability in Rust and Python. We prefer to hire experienced candidates; however, raw skill trumps experience and we welcome strong junior applicants.

Required Skills

  • Intermediate level software development skills in Rust and Python
  • Linux systems administration experience
  • 5 years of relevant software development experience
  • Strong skills in network services and REST APIs
  • SQL databases (Postgres or MySQL)
  • Self-starter; able to work independently based on high-level requirements

 

Examples of desirable skills, knowledge and experience

  • 5 years of relevant work experience
  • Prior experience working with Diesel and common database patterns in Rust
  • Configuration management systems such as Saltstack, Chef, Puppet or Ansible
  • Prior experience working with datacenter infrastructure automation at scale
  • Load balancing and reverse proxies such as Nginx, Varnish, HAProxy, Apache
  • The ability to understand service metrics and visualize them using Grafana and Prometheus
  • Key/Value stores (Redis, KeyDB, CouchBase, KyotoTycoon, Cassandra, LevelDB)

 

Bonus Points

  • Experience with programming languages other than those listed in requirements.
  • Network fundamentals: DHCP, subnetting, routing, firewalls, IPv6
  • Experience with continuous integration and deployment pipelines
  • Performance analysis and debugging with tools like perf, sar, strace, gdb, and dtrace
  • Experience developing systems that are highly available and redundant across regions
  • Experience with the Linux kernel and Linux software packaging
  • Internetworking and BGP

 

Some tools that we use

  • Rust
  • Python
  • Diesel
  • Actix
  • Tokio
  • Apache Airflow 
  • Salt
  • Netbox
  • Docker
  • Kubernetes
  • Nginx
  • PostgreSQL
  • Redis
  • Prometheus

 

About the Team

At Cloudflare, our engineering and research team combines the expertise of some of the industry’s most talented professionals in both software engineering and advanced research. Our bot and fraud detection research and development team focuses on innovating and developing leading-edge solutions to combat online fraud and bot activities, thereby ensuring the highest levels of security and integrity for online platforms. Collaborating closely with product development and engineering groups, our researchers play a crucial role in advancing our fraud detection products through identification of new signals and refining our bot detection algorithms.

What You'll Do

As a researcher in our bot and fraud detection team, you will:

  • Engage in cutting-edge research to design, develop, and enhance our fraud detection products.
  • Apply your knowledge in data science and machine learning to analyze and interpret vast datasets, contributing to the fight against sophisticated online attackers.
  • Collaborate with cross-functional teams to integrate research findings into practical, scalable solutions.
  • Utilize and improve upon our technology stack, which includes Python, Rust, Kafka, Kubernetes, PostgreSQL, and Clickhouse.
  • Make significant contributions to the field of bot and fraud detection, impacting the security of online applications globally.

What Are We Looking For?

  • Advanced degree (PhD or Master’s) in the fields of Computer Science, Data Science, or Cybersecurity.
  • Proven track record of research in academia or industry, preferably in areas related to cybersecurity, bot detection, browser fingerprinting, fraud detection, or machine learning.
  • Expertise in web security, network protocols, and web application architectures.
  • Demonstrated ability to work with large-scale datasets and distributed computing.

Bonus points

  • Experience in developing and implementing bot detection and fraud prevention strategies.
  • Publication record in peer-reviewed venues or industry conferences in relevant fields.
  • Familiarity with cloud computing environments and big data technologies.
  • Experience with productionizing machine learning models.
  • Experience with technologies such as Docker, Kubernetes, Salt.
  • Familiarity with writing and optimizing advanced SQL queries.
  • Experience with columnar databases such as Clickhouse.

Compensation

Compensation may be adjusted depending on work location.

  • For Colorado-based hires: Estimated annual salary of $137,000 - $152,000
  • For New York City, Washington, and California (excluding Bay Area) based hires: Estimated annual salary of $154,000 - $171,000
  • For Bay Area-based hires: Estimated annual salary of $162,000 - $180,000

Equity

This role is eligible to participate in Cloudflare’s equity plan.

Benefits

Cloudflare offers a complete package of benefits and programs to support you and your family.  Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!  The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.

Health & Welfare Benefits

  • Medical/Rx Insurance
  • Dental Insurance
  • Vision Insurance
  • Flexible Spending Accounts
  • Commuter Spending Accounts
  • Fertility & Family Forming Benefits
  • On-demand mental health support and Employee Assistance Program
  • Global Travel Medical Insurance

Financial Benefits

  • Short and Long Term Disability Insurance
  • Life & Accident Insurance
  • 401(k) Retirement Savings Plan
  • Employee Stock Participation Plan

Time Off

  • Flexible paid time off covering vacation and sick leave
  • Leave programs, including parental, pregnancy health, medical, and bereavement leave

 

What Makes Cloudflare Special?

We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.

Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

Path Forward Partnership: Since 2016, we have partnered with Path Forward, a nonprofit organization, to create 16-week positions for mid-career professionals who want to get back to the workplace after taking time off to care for a child, parent, or loved one.

1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here’s the deal - we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

 

Sound like something you’d like to be a part of? We’d love to hear from you!

This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.

See more jobs at Cloudflare

Apply for this job

1d

Staff Data Scientist - Sales & Account Management

Square - San Francisco, CA, Remote
Bachelor degree, tableau, airflow, sql, Design, python

Square is hiring a Remote Staff Data Scientist - Sales & Account Management

Job Description

The Cash App Data Science (DS) organization is growing and we are looking for a Data Scientist to join the team, embedded within our Sales and Account Management domain. You will be responsible for deriving valuable insights from our extremely unique datasets as well as developing models, forecasts, analyses, and reports to help achieve merchant acquisition, retention, growth, and profitability goals.

You will:

  • Partner directly with the Cash App Sales & AM team, working closely with operations, strategy, engineers, account executives/managers and leads
  • Analyze large datasets using SQL and scripting languages to surface actionable insights and opportunities to key stakeholders
  • Approach problems from first principles, using a variety of statistical and mathematical modeling techniques to research and understand merchant behavior
  • Design and analyze A/B experiments to evaluate the impact of changes we make to our operational processes and tools (a minimal worked sketch follows this list)
  • Work with engineers to log new, useful data sources as we evolve processes, tooling, and features
  • Build, forecast, and report on metrics that drive strategy and facilitate decision making for key business initiatives
  • Write code to effectively process, cleanse, and combine data sources in unique and useful ways, often resulting in curated ETL datasets that are easily used by the broader team
  • Build and share data visualizations and self-serve dashboards for your partners
  • Effectively communicate your work with team leads and cross-functional stakeholders on a regular basis
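
As a hedged illustration of the A/B-experiment analysis mentioned in the list above (the function and the conversion counts are invented for this sketch and are not Square's tooling or data), a two-sided two-proportion z-test can be computed with nothing beyond the Python standard library:

    from statistics import NormalDist


    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return (z statistic, two-sided p-value) for a difference in conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        # Pool the two arms to estimate the standard error under the null hypothesis.
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p_value


    # Illustrative numbers only: 4.8% vs 5.6% conversion on 10,000 users per arm.
    z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
    print(f"z = {z:.2f}, p = {p:.4f}")

With these made-up numbers the test returns roughly z = 2.5 and p = 0.01, i.e. a lift that is detectable at conventional significance levels.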

Qualifications

You have:

  • An appreciation for the connection between your work and the experience it delivers to customers. Previous exposure to or interest in marketplace platforms, especially on the merchant side, would be great to have.
  • A bachelor’s degree in statistics, data science, or a similar STEM field with 8+ years of experience in a relevant role OR
  • A graduate degree in statistics, data science, or a similar STEM field with 6+ years of experience in a relevant role
  • Advanced proficiency with SQL and data visualization tools (e.g., Looker, Tableau, etc.)
  • Experience with scripting and data analysis programming languages, such as Python or R
  • Experience with cohort and funnel analyses, and a deep understanding of statistical concepts such as selection bias, probability distributions, and conditional probabilities

Technologies we use and teach:

  • SQL, Snowflake, etc.
  • Python (Pandas, Numpy)
  • Looker, Mode, Tableau, Prefect, Airflow

See more jobs at Square

Apply for this job

1d

Senior Data Scientist - Support

Square - San Francisco, CA, Remote
Bachelor degree, tableau, airflow, sql, Design, python

Square is hiring a Remote Senior Data Scientist - Support

Job Description

The Cash App Support organization is growing and we are looking for a Data Scientist (DS) to join the team. The DS team at Cash derives valuable insights from our extremely unique datasets and turns those insights into actions that improve the experience for our customers every day. In this role, you’ll be embedded in our Support org and work closely with operations and other cross-functional partners to drive meaningful change in how our customers interact with the Support team and resolve issues with their accounts.

You will:

  • Partner directly with a Cash App customer support team, working closely with operations, engineers, and machine learning
  • Analyze large datasets using SQL and scripting languages to surface actionable insights and opportunities to the operations team and other key stakeholders
  • Approach problems from first principles, using a variety of statistical and mathematical modeling techniques to research and understand advocate and customer behavior
  • Design and analyze A/B experiments to evaluate the impact of changes we make to our operational processes and tools
  • Work with engineers to log new, useful data sources as we evolve processes, tooling, and features
  • Build, forecast, and report on metrics that drive strategy and facilitate decision making for key business initiatives
  • Write code to effectively process, cleanse, and combine data sources in unique and useful ways, often resulting in curated ETL datasets that are easily used by the broader team
  • Build and share data visualizations and self-serve dashboards for your partners
  • Effectively communicate your work with team leads and cross-functional stakeholders on a regular basis

Qualifications

You have:

  • An appreciation for the connection between your work and the experience it delivers to customers. Previous exposure to or interest in customer support problems would be great to have
  • A bachelor’s degree in statistics, data science, or a similar STEM field with 5+ years of experience in a relevant role OR
  • A graduate degree in statistics, data science, or a similar STEM field with 2+ years of experience in a relevant role
  • Advanced proficiency with SQL and data visualization tools (e.g., Looker, Tableau, etc.)
  • Experience with scripting and data analysis programming languages, such as Python or R
  • Experience with cohort and funnel analyses, and a deep understanding of statistical concepts such as selection bias, probability distributions, and conditional probabilities
  • Experience in a high-growth tech environment

Technologies we use and teach:

  • SQL, Snowflake, etc.
  • Python (Pandas, Numpy)
  • Looker, Mode, Tableau, Prefect, Airflow

See more jobs at Square

Apply for this job

1d

Java Solution Architect

Version1 - Málaga, Spain, Remote
airflow, oracle, Design, api, java, python, AWS

Version1 is hiring a Remote Java Solution Architect

Job Description

Java Solution Architect

MUST BE BASED WITHIN 50 MILES OF EDINBURGH, LONDON, BIRMINGHAM, MANCHESTER, NEWCASTLE, DUBLIN, OR BELFAST

REMOTE BASED WITH VERY OCCASIONAL TRAVEL TO CLIENT SITES AND OFFICE.

Would you like the opportunity to expand your skillset across Java, Python, Spark, Hadoop, Trino, and Airflow in the Banking & Financial Services industries?

How about if you worked with an Innovation Partner of the Year Winner (2023 Oracle EMEA Partner Awards), Global Microsoft Modernising Applications Partner of the Year (2023) and AWS Collaboration Partner of the Year (2023) who would give you the opportunity to undertake accreditations and educational assistance for courses relevant to your role?

Here at Version 1, we are currently in the market for an experienced Java Solution Architect to join our growing Digital, Data & Cloud Practice.

You will have the opportunity to work with the latest technology on projects across a multiplicity of sectors and industries.

Java Solution Architect

Job Description

You will be:

  • Leading the development of Java and Python development projects.
  • Designing and developing API integrations using Spark.
  • Collaborating with clients and internal teams to understand business requirements and translate them into HLD and LLD solutions.
  • Defining the architecture and technical design.
  • Designing data flows and integrations using Hadoop.
  • Working with the product team and testers throughout testing.
  • Creating and developing comprehensive documentation, including solution architecture, design, and user guides.
  • Providing training and support to end-users and client teams.
  • Staying up to date with the latest trends and best practices, and sharing knowledge with the team.

Qualifications

You will have expertise in the following:

  • Java, Python, Spark, Hadoop (Essential)
  • Trino, Airflow (Desirable)
  • Architecture and capabilities.
  • Designing and implementing complex solutions with a focus on scalability and security.
  • Excellent communication and collaboration skills.

Apply for this job

2d

(Senior) Python Engineer, Data Group

Wolt - Stockholm, Sweden, Remote
airflow, kubernetes, python

Wolt is hiring a Remote (Senior) Python Engineer, Data Group

Job Description

Data at Wolt

As the scale of Wolt has rapidly grown, we are introducing new users to our data platform every day and want this to become a coherent and streamlined experience for all users, whether they’re Analysts and Data Scientists working with our data or teams bringing new data to the platform from their applications. We aim both to provide new platform capabilities across batch, streaming, orchestration, and data integration to serve our users’ needs, and to build an intuitive interface for them to solve their use cases without having to learn the details of the underlying tools.

In the context of this role we are hiring an experienced Senior Software Engineer to provide technical leadership and individual contribution in one of the following workstreams:

Data Governance

Wolt’s Data Group has already developed initial foundational tooling in the areas of data management, security, auditing, data catalog, and quality monitoring, but through your technical contributions you will ensure our Data Governance tooling is state of the art. You’ll be improving the current Data Governance platform, making sure it can be further integrated with the rest of the Data Platform and Wolt Services in a scalable, secure, compliant way, without significant disruptions to the teams.

Data Experience

We want to ensure our Analysts, Data Scientists, and Engineers can discover, understand, and publish high-quality data at scale. We have recently released a new data platform tool which enables simple, yet powerful creation of workflows via a declarative interface. You will help us ensure our users succeed in their work with effective and polished user experiences by developing our internal user-facing tooling and curating our documentation to the highest standards. And what's best, you get to work closely with excited users to get continuous feedback about released features while supporting and onboarding them to new workflows.

Data Lakehouse

We recently started this workstream to manage data integration, organization, and maintenance of our new Iceberg based data lakehouse architecture. Together, we build and maintain ingestion pipelines to efficiently gather data from diverse sources, ensuring seamless data flow. We create and manage workflows to transform raw data into structured formats, guaranteeing data quality and accessibility for analytics and machine learning purposes.

When you join, we’ll match you with one of these workstreams based on our needs and your skills, experience, and preferences.

How we work

Our teams have a lot of autonomy and ownership in how they work and solve their challenges. We value collaboration, learning from each other and helping each other out to achieve the team’s goals. We create an environment of trust, in which everyone’s ideas are heard and where we challenge each other to find the best solutions. We have empathy towards our users and other teams. Even though we’re working in a mostly remote environment these days, we stay connected and don’t forget to have fun together building great software!

Our tech stack

Our primary programming language of choice is Python. We deploy our systems in Kubernetes and AWS. We use Datadog for observability (logging and metrics). We have built our data warehouse on top of Snowflake and orchestrate our batch processes with Airflow and Dagster. We are heavy users of Kafka and Kafka Connect. Our CI/CD pipelines rely on GitHub actions and Argo Workflows.
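
As a rough sketch of the batch orchestration mentioned above (assuming Airflow 2.4+ and its TaskFlow API; the DAG, tasks, and data below are invented for illustration and are not Wolt's actual pipelines), a minimal daily pipeline might look like this:

    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def example_batch_pipeline():
        @task
        def extract() -> list[dict]:
            # Stand-in for pulling raw records from a source system.
            return [{"order_id": 1, "amount": 12.5}]

        @task
        def transform(records: list[dict]) -> float:
            # Aggregate the raw records into a single daily metric.
            return sum(r["amount"] for r in records)

        @task
        def load(total: float) -> None:
            # A real pipeline would write to the warehouse or lakehouse here.
            print(f"daily total: {total}")

        load(transform(extract()))


    example_batch_pipeline()

Each decorated function becomes an Airflow task, and passing return values between them wires up the extract -> transform -> load dependencies that the scheduler then runs once per day.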

Qualifications

The vast majority of our services, applications, and data pipelines are written in Python, so several years of experience shipping production-quality Python software in high-throughput environments is essential. You should be very comfortable with typing, dependency management, and unit, integration, and end-to-end tests. If you believe that software isn’t just a program running on a machine, but the solution to someone’s problem, you’re in the right place.

Having previous experience in planning and executing complex projects that touch multiple teams/stakeholders and run across a whole organization is a big plus. Good communication and collaboration skills are essential, and you shouldn’t shy away from problems, but be able to discuss them in a constructive way with your team and the Wolt Product team at large.

Familiarity with parts of our tech stack is definitely a plus, but we hire for attitude and ability to learn over knowing a specific technology that can be learned.

The tools we are building inside of the data platform ultimately serve our many stakeholders across the whole company, whether they are Analysts, Data Scientists or engineers in other teams that produce or consume data. 

We want all of our users to love the tools we’re building and that is why we want you to focus on building intuitive and user friendly applications that enable everyone to use and work with data at Wolt.

See more jobs at Wolt

Apply for this job

3d

Data Engineer PySpark AWS

2 years of experience, agile, Bachelor's degree, jira, terraform, scala, airflow, postgres, sql, oracle, Design, mongodb, java, mysql, jenkins, python, AWS

FuseMachines is hiring a Remote Data Engineer PySpark AWS

See more jobs at FuseMachines

Apply for this job

3d

Senior Data Engineer

Devoteam - Tunis, Tunisia, Remote
airflow, sql, scrum

Devoteam is hiring a Remote Senior Data Engineer

Job Description

Within the “Data Platform” department, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on GCP, using tools such as BigQuery, Airflow, Looker, and DBT (a minimal sketch follows this list).
  • Collaborate with business teams to understand data requirements and design suitable solutions.
  • Optimize the performance of SQL queries and ETL processes to guarantee fast response times and scalability.
  • Implement data quality processes to guarantee data integrity and consistency.
  • Work closely with the engineering teams to integrate data pipelines into existing applications and services.
  • Stay up to date with new technologies and best practices in data processing and analytics.
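
As a minimal sketch of the first task in the list above (assuming Airflow 2.4+ with the apache-airflow-providers-google package; the project, dataset, table, and location names are placeholders rather than any client's actual pipeline), an Airflow DAG can run a scheduled BigQuery transformation like this:

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.bigquery import (
        BigQueryInsertJobOperator,
    )

    # Placeholder SQL: aggregate a raw orders table into a daily summary.
    TRANSFORM_SQL = """
    CREATE OR REPLACE TABLE `my-project.analytics.daily_orders` AS
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM `my-project.raw.orders`
    GROUP BY order_date
    """

    with DAG(
        dag_id="daily_orders_transform",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        BigQueryInsertJobOperator(
            task_id="build_daily_orders",
            configuration={"query": {"query": TRANSFORM_SQL, "useLegacySql": False}},
            location="EU",  # placeholder BigQuery location
        )

Downstream DBT models or Looker dashboards would then read from the resulting table, and data quality checks can be chained as further tasks in the same DAG.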

Qualifications

  • Master’s-level degree (Bac+5) from an engineering school or university equivalent, with a specialization in computer science.
  • At least 3 years of experience in data engineering, with significant experience in a GCP-based cloud environment.
  • GCP (Google Cloud Platform) certification is a plus.
  • Very good written and oral communication (high-quality deliverables and reporting)

See more jobs at Devoteam

Apply for this job

5d

Machine Learning Engineer (All Genders)

Dailymotion - Paris, France, Remote
airflow, Design, docker, python

Dailymotion is hiring a Remote Machine Learning Engineer (All Genders)

Job Description

Joining the Dailymotion data team means taking part in the creation of our unique algorithms, designed to bring more diversity and nuance to online conversations.

Our Machine Learning team, established in 2016, has been actively involved in developing models across a diverse range of topics. Primarily, we focus on recommender systems, and extend our expertise to content classification, moderation, and search functionalities.

You will be joining a seasoned and diverse team of Senior Machine Learning Engineers, who possess the capability to independently conceptualize, deploy, A/B test, and monitor their models.

We collaborate closely with the Data Product Team, aligning our efforts to make impactful, data-driven decisions for our users.

Learn more about our ongoing projects: https://medium.com/dailymotion

As a Machine Learning Engineer, you will:

  • Design and deploy scalable recommender systems, handling billions of user interactions and hundreds of millions of videos.
  • Contribute to various projects spanning machine learning domains, encompassing content classification, moderation, and ad-tech.
  • Foster autonomy, taking ownership of your scope, and actively contribute ideas and solutions. Maintain and monitor your models in production.
  • Collaborate with cross-functional teams throughout the entire machine learning model development cycle:
    • Define success metrics in collaboration with stakeholders.
    • Engage in data collection and hypothesis selection with the support of the Data Analysts Team.
    • Conduct machine learning experiments, including feature engineering, model selection, offline validation, and A/B Testing.
    • Manage deployment, orchestration, and maintenance on cloud platforms with the Data Engineering Team.

Qualifications

  • Master's degree/PhD in Machine Learning, Computer Science, or a related quantitative field.
  • At least 1 year of professional experience working with machine learning models at scale (experience with recommender systems is a plus).
  • Proficient in machine learning concepts, with the ability to articulate theoretical concepts effectively.
  • Strong coding skills in Python & SQL.
  • Experience in building production ML systems; familiarity with technologies such as GitHub, Docker, Airflow, or equivalent services provided by GCP/AWS/Azure.
  • Experience with distributed frameworks is advantageous (Dataflow, Spark, etc.).
  • Strong business acumen and excellent communication skills in both English and French (fluent proficiency).
  • Demonstrated aptitude for autonomy and proactivity is highly valued.

See more jobs at Dailymotion

Apply for this job

5d

Staff Site Reliability Engineer

Mozilla - Remote US
6 years of experience, terraform, airflow, sql, Design, ansible, azure, java, c++, openstack, docker, elasticsearch, kubernetes, jenkins, python, AWS, backend, Node.js

Mozilla is hiring a Remote Staff Site Reliability Engineer


Why Mozilla?

Mozilla Corporation is the non-profit-backed technology company that has shaped the internet for the better over the last 25 years. We make pioneering brands like Firefox, the privacy-minded web browser, and Pocket, a service for keeping up with the best content online. Now, with more than 225 million people around the world using our products each month, we’re shaping the next 25 years of technology. Our work focuses on diverse areas including AI, social media, security and more. And we’re doing this while never losing our focus on our core mission – to make the internet better for everyone.

The Mozilla Corporation is wholly owned by the non-profit 501(c) Mozilla Foundation. This means we aren’t beholden to any shareholders — only to our mission. Along with thousands of volunteer contributors and collaborators all over the world, Mozillians design, build and distribute open-source software that enables people to enjoy the internet on their terms.

About this team and role:

Mozilla’s Release SRE Team is looking for a Staff SRE to help us build and maintain infrastructure that supports Mozilla products. You will combine skills from DevOps/SRE, systems administration, and software development to influence product architecture and evolution by crafting reliable cloud-based infrastructure for internal and external services.

As a Staff SRE you will work closely with Mozilla’s engineering and product teams and participate in significant engineering projects across the company. You will collaborate with hardworking engineers across different levels of experience and backgrounds. Most of your work will involve improving existing systems, building new infrastructure, evaluating tools and eliminating toil.

What you’ll do:

  • Manage infrastructure in AWS and GCP
  • Write, maintain, and expand automation scripts, metrics and monitoring tooling, and orchestration recipes
  • Lead other SREs and software development teams to deliver products with an eye on reliability and automation
  • Demonstrate accountability in the delivery of work
  • Spot and raise potential issues to the team
  • Be on-call for production services and infrastructure
  • Be trusted to resolve unclear but urgent tasks

What you’ll bring:

  • Degree and 6 years of experience related to backend software development, cloud operations, or DevOps/SRE
  • Experience programming in at least one of the following languages: Python, Java, C/C++, Go, Node.js or Rust. 
  • Involvement in running services in the cloud
  • Kubernetes administration and optimization
  • Proven understanding of database systems (SQL and/or non-relational databases)
  • Infrastructure As Code and Configuration as Code tooling (Puppet, Chef, Ansible, Salt, Terraform, Amazon Cloudformation or Google Cloud Deployment Manager)
  • Strong communication skills
  • Curiosity and interest in learning new things
  • Commitment to our values:
    • Welcoming differences
    • Being relationship-minded
    • Practicing responsible participation
    • Having grit

Bonus points for…

  • CI/CD orchestration (Jenkins, CircleCI, or TravisCI)
  • ETL, data modeling, cloud-based data storage, processing
  • GCP Data Services (Dataflow, BigQuery, Dataproc)
  • Workflow and data pipeline orchestration (Airflow, Oozie, Jenkins, etc)
  • Container orchestration technologies (Kubernetes, OpenStack, Docker swarm, etc)
  • Open source software involvement
  • Monitoring/Logging with technologies like Splunk, ElasticSearch, Logstash/Fluentd, Stackdriver, Time-series databases like InfluxDB etc.

What you’ll get:

  • Generous performance-based bonus plans to all regular employees - we share in our success as one team
  • Rich medical, dental, and vision coverage
  • Generous retirement contributions with 100% immediate vesting (regardless of whether you contribute)
  • Quarterly all-company wellness days where everyone takes a pause together
  • Country specific holidays plus a day off for your birthday
  • One-time home office stipend
  • Annual professional development budget
  • Quarterly well-being stipend
  • Considerable paid parental leave
  • Employee referral bonus program
  • Other benefits (life/AD&D, disability, EAP, etc. - varies by country)

About Mozilla 

Mozilla exists to build the Internet as a public resource accessible to all because we believe that open and free is better than closed and controlled. When you work at Mozilla, you give yourself a chance to make a difference in the lives of Web users everywhere. And you give us a chance to make a difference in your life every single day. Join us to work on the Web as the platform and help create more opportunity and innovation for everyone online.

Commitment to diversity, equity, inclusion, and belonging

Mozilla understands that valuing diverse creative practices and forms of knowledge are crucial to and enrich the company’s core mission. We encourage applications from everyone, including members of all equity-seeking communities, such as (but certainly not limited to) women, racialized and Indigenous persons, persons with disabilities, persons of all sexual orientations, gender identities, and expressions.

We will ensure that qualified individuals with disabilities are provided reasonable accommodations to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment, as appropriate. Please contact us at hiringaccommodation@mozilla.com to request accommodation.

We are an equal opportunity employer. We do not discriminate on the basis of race (including hairstyle and texture), religion (including religious grooming and dress practices), gender, gender identity, gender expression, color, national origin, pregnancy, ancestry, domestic partner status, disability, sexual orientation, age, genetic predisposition, medical condition, marital status, citizenship status, military or veteran status, or any other basis covered by applicable laws.  Mozilla will not tolerate discrimination or harassment based on any of these characteristics or any other unlawful behavior, conduct, or purpose.

Group: C

#LI-REMOTE

Req ID: R2515

To learn more about our Hiring Range System, please click this link.

Hiring Ranges:

US Tier 1 Locations: $163,000 - $239,000 USD
US Tier 2 Locations: $150,000 - $220,000 USD
US Tier 3 Locations: $138,000 - $203,000 USD

See more jobs at Mozilla

Apply for this job

5d

Senior Data Engineer (Data Competency Center)

Sigma Software - Warsaw, Poland, Remote
tableau, nosql, airflow, sql, python

Sigma Software is hiring a Remote Senior Data Engineer (Data Competency Center)

Job Description

  • Pre-sales collaboration: Collaborating with solution architects, project managers, and business analysts to gather requirements, perform investigations, and provide estimations for potential projects
  • Project initiation: Taking the lead in driving new projects and being a key driver in their success
  • Project execution: Becoming a part of the team for one of the opportunities in the pipeline, fulfilling the Data Engineer role
  • Contributing to and spearheading Data Engineering excellence: Researching technology trends and conducting best practices analysis to ensure our solutions remain state-of-the-art

Qualifications

  • Proficiency in building, maintaining, testing, and delivering large-scale data pipelines
  • ETL/ELT expertise: practical experience with at least one end-to-end solution, from inception through development and maintenance
  • Proficiency in working with large data extraction, aggregation, and manipulation using a selected database. Experience and strong knowledge of SQL and NoSQL databases. Understanding the pros and cons of different types of databases, experience in data modeling, and database optimizations
  • At least 3-5 years of proficiency in Python or Scala for data processing and transformation
  • Hands-on experience with main frameworks and libraries in Data Engineering domains: Spark, Hive, Kafka, Airflow, Flink, etc., with a proven record of debugging and optimization experience
  • Experience with CI/CD in data engineering
  • Experience with cloud-based data processing solutions

WILL BE A PLUS:

  • Knowledge of K8s orchestration
  • Exposure to OLAP tools like Tableau, Qlik, Grafana, or similar
  • Databricks experience/certification

See more jobs at Sigma Software

Apply for this job

6d

Data Manager

remote-first, tableau, airflow, sql, python

Parsley Health is hiring a Remote Data Manager

About us:

Parsley Health is a digital health company with a mission to transform the health of everyone, everywhere with the world's best possible medicine. Today, Parsley Health is the nation's largest health care company helping people suffering from chronic conditions find relief with root cause resolution medicine. Our work is inspired by our members’ journeys and our actions are focused on impact and results.

The opportunity:

We’re hiring an experienced Manager of Data to drive the data strategy for Parsley Health by championing quality data across the organization and leading the data science, analytics, and data engineering functions.

This person should have knowledge of the healthcare space, specifically related to health outcomes and benchmarks, and will report to the Chief Technology Officer.

What you’ll do:

  • Be passionate about our mission to live healthier through revolutionary primary care, excited for the future of healthcare, and personally invested in wellness.
  • Collaborate on strategic direction with the leadership team and executives to evolve our mid- and long-term roadmap
  • Be a hands-on manager who writes code and has experience across a variety of systems and architectures, analysis, and presentation.
  • Support identifying clinical outcomes and publishing papers with the clinical team and SVP of Clinical Operations.
  • Empower high quality product decisions through data analysis.
  • Develop machine learning models to better assist our members’ health care needs.
  • Foster a strong culture of data-driven decision making through training and mentorship within your team and across the company.
  • Implement and maintain a world-class data stack that empowers data consumers with reliable, accessible, compliant insights.
  • Consult with data consumers to improve their measurement strategies.
  • Manage a team of two members and grow it to a multi disciplinary function within a few years. 

What you’ll need:

  • Experience in building a data strategy for a small team or company. Potentially previously the first data hire at a company (not required). 
  • Proficient in statistical methods.
  • Loves to dive deep into problems and solutions to identify root causes, and is able to extrapolate a big-picture strategy or story.
  • Helps people with their careers while creating and improving structures that enable career growth
  • Sets up processes and governance around project management, data quality, prioritization, etc.
  • Well versed in SQL, at least one scripting language (R, Python, etc.), Excel, and BI platforms (Looker, Tableau, etc.).

Tech stack

  • Python
  • GCP
  • Airflow
  • SQL
  • Looker
  • Dataform (dbt)

Benefits and Compensation:

  • Equity Stake
  • 401(k) + Employer Matching program
  • Remote-first with the option to work from one of our centers in NYC or LA 
  • Complimentary Parsley Health Complete Care membership
  • Subsidized Medical, Dental, and Vision insurance plan options
  • Generous 4+ weeks of paid time off
  • Annual professional development stipend

Parsley Health is committed to providing an equitable, fair and transparent compensation program for all employees.

The starting salary for this role is between $165,750 - $195,000, depending on skills and experience. We take a geo-neutral approach to compensation within the US, meaning that we pay based on job function and level, not location.

Individual compensation decisions are based on a number of factors, including experience level, skillset, and balancing internal equity relative to peers at the company. We expect the majority of the candidates who are offered roles at our company to fall healthily throughout the range based on these factors. We recognize that the person we hire may be less experienced (or more senior) than this job description as posted. If that ends up being the case, the updated salary range will be communicated with candidates during the process.


At Parsley Health we believe in celebrating everything that makes us human and are proud to be an equal opportunity workplace. We embrace diversity and are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe that the more inclusive we are, the better we can serve our members. 


Important note:

In light of the recent increase in hiring scams, if you're selected to move onto the next phase of our hiring process, a member of our Talent Acquisition team will reach out to you directly from an @parsleyhealth.com email address to guide you through our interview process.

Please note:

  • We will never communicate with you via Microsoft Teams
  • We will never ask for your bank account information at any point during the recruitment process, nor will we send you a check (electronic or physical) to purchase home office equipment

We look forward to connecting!

#LI-Remote

See more jobs at Parsley Health

Apply for this job

7d

Senior AI Scientist (Taiwan)

GOGOX - Remote
airflow, sql, Design, azure, api, java, python, AWS

GOGOX is hiring a Remote Senior AI Scientist (Taiwan)

See more jobs at GOGOX

Apply for this job

7d

Manager, Software Engineering - Data Platform

Samsara - Canada - Remote
Master’s Degree, terraform, airflow, kubernetes, AWS

Samsara is hiring a Remote Manager, Software Engineering - Data Platform

Who we are

Samsara (NYSE: IOT) is the pioneer of the Connected Operations™ Cloud, which is a platform that enables organizations that depend on physical operations to harness Internet of Things (IoT) data to develop actionable insights and improve their operations. At Samsara, we are helping improve the safety, efficiency and sustainability of the physical operations that power our global economy. Representing more than 40% of global GDP, these industries are the infrastructure of our planet, including agriculture, construction, field services, transportation, and manufacturing — and we are excited to help digitally transform their operations at scale.

Working at Samsara means you’ll help define the future of physical operations and be on a team that’s shaping an exciting array of product solutions, including Video-Based Safety, Vehicle Telematics, Apps and Driver Workflows, Equipment Monitoring, and Site Visibility. As part of a recently public company, you’ll have the autonomy and support to make an impact as we build for the long term. 

Recent awards we’ve won include:

Glassdoor's Best Places to Work 2024

Best Places to Work by Built In 2024

Great Place To Work Certified™ 2023

Fast Company's Best Workplaces for Innovators 2023

Financial Times The Americas’ Fastest Growing Companies 2023

We see a profound opportunity for data to improve the safety, efficiency, and sustainability of operations, and hope you consider joining us on this exciting journey. 

Click here to learn more about Samsara's cultural philosophy.

About the role:

The Samsara Data Platform team owns and develops the analytic platform across Samsara. As a Manager II of Data Platform, you will build and lead teams that maintain our data lake and surrounding infrastructure. You will also be responsible for meeting new business needs, including expanding the platform as the company grows (both in size and geographic coverage), privacy and security needs, and customer-facing feature developments.

You should apply if:

  • You want to impact the industries that run our world: The software, firmware, and hardware you build will result in real-world impact—helping to keep the lights on, get food into grocery stores, and most importantly, ensure workers return home safely.
  • You want to build for scale: With over 2.3 million IoT devices deployed to our global customers, you will work on a range of new and mature technologies driving scalable innovation for customers across industries driving the world's physical operations.
  • You are a life-long learner: We have ambitious goals. Every Samsarian has a growth mindset as we work with a wide range of technologies, challenges, and customers that push us to learn on the go.
  • You believe customers are more than a number: Samsara engineers enjoy a rare closeness to the end user and you will have the opportunity to participate in customer interviews, collaborate with customer success and product managers, and use metrics to ensure our work is translating into better customer outcomes.
  • You are a team player: Working on our Samsara Engineering teams requires a mix of independent effort and collaboration. Motivated by our mission, we’re all racing toward our connected operations vision, and we intend to win—together.

Click here to learn about what we value at Samsara. 

In this role, you will: 

  • Lead a team of data-focused engineers to build and maintain a stable, scalable, and modern data platform capable of handling petabytes of data. 
  • Help drive long-term planning and establish scalable processes for execution
  • Actively contribute to building the data roadmap for Samsara
  • Stay connected to novel technological developments that suit Samsara’s needs.
  • Champion, role model, and embed Samsara’s cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices
  • Hire, develop and lead an inclusive, engaged, and high-performing international team

Minimum requirements for the role:

  • BS, MS, or PhD in Computer Science or other related technical degree
  • 2+ years of technical people management experience
  • 5+ years of relevant technical experience with data infrastructure
  • Experience building and deploying large-scale data platform systems with feedback loops for continuous improvement
  • Comfortable leading infrastructure development in collaboration with cross-functional teams, scientists, and researchers

An ideal candidate also has:

  • MS or PhD in Computer Science or other technical degree
  • Experience with state-of-the-art data platform technologies such as the following (a brief, hedged sketch combining a few of these appears after this list):
    • AWS (S3, RDS, SQS, DMS, DynamoDB, etc.)
    • Spark (required); Flink and Trino/Presto are a plus
    • Data lake file formats such as Delta, Hudi, or Iceberg
    • Python/Scala
    • Container-based orchestration services such as Kubernetes, ECS, Fargate, etc.
    • Infrastructure as Code tools (e.g., Terraform)
    • Go is a plus
    • Data orchestration system experience is a plus (e.g., Airflow, Dagster)
  • Proven track record for innovation and delivering value to customers (both internal and external).
  • Demonstrated ability to build cross-functional consensus and drive cross-team collaboration
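
For orientation only, here is a minimal sketch of how a couple of the technologies named above can fit together, assuming Airflow 2.4+ and a hypothetical daily data-lake maintenance job; names such as lake_maintenance and compact_small_files are illustrative, not Samsara's actual pipeline:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def compact_small_files(**context):
    # Placeholder: a real platform task might call Spark to compact
    # small files in a Delta/Hudi/Iceberg table for the given day.
    # Airflow passes the runtime context because the callable accepts **kwargs.
    print("compacting lake partitions for", context["ds"])


with DAG(
    dag_id="lake_maintenance",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # requires Airflow 2.4+
    catchup=False,
) as dag:
    PythonOperator(
        task_id="compact_small_files",
        python_callable=compact_small_files,
    )

In practice the heavy lifting would run on Spark against the chosen lake format; the orchestration layer (Airflow or Dagster) mainly handles scheduling, retries, and dependencies.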

Samsara’s Compensation Philosophy: Samsara’s compensation program is designed to deliver Total Direct Compensation (based on role, level, and geography) that is at or above market. We do this through our base salary + bonus/variable + restricted stock unit awards (RSUs) for eligible roles. For eligible roles, a new hire RSU award may be awarded at the time of hire, and additional RSU refresh grants may be awarded annually. 

We pay for performance, and top performers in eligible roles may receive above-market equity refresh awards which allow employees to achieve higher market positioning.

The range of annual base salary for full-time employees for this position is below. Please note that base pay offered may vary depending on factors including your city of residence, job-related knowledge, skills, and experience.
$142,800 to $184,800 USD

At Samsara, we welcome everyone regardless of their background. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender, gender identity, sexual orientation, protected veteran status, disability, age, and other characteristics protected by law. We depend on the unique approaches of our team members to help us solve complex problems. We are committed to increasing diversity across our team and ensuring that Samsara is a place where people from all backgrounds can make an impact.

Benefits

Full time employees receive a competitive total compensation package along with employee-led remote and flexible working, health benefits, Samsara for Good charity fund, and much, much more. Take a look at our Benefits site to learn more.

Accommodations 

Samsara is an inclusive work environment, and we are committed to ensuring equal opportunity in employment for qualified persons with disabilities. Please email accessibleinterviewing@samsara.com or click here if you require any reasonable accommodations throughout the recruiting process.

Flexible Working 

At Samsara, we have adopted a flexible way of working, enabling teams and individuals to do their best work, regardless of where they’re based. We value in-person collaboration and know a change of scenery and quiet space to work is welcomed from time to time, but also appreciate that the world of work has changed. Our offices remain open for those who prefer to collaborate or work in-office, but we also encourage fully remote applicants. As most roles are not required to be in the office, we are able to hire remotely where Samsara has an established presence. If a role is required to be in a certain location and candidates do not have work authorization for that location, Samsara will conduct an immigration assessment. If the role is not required to be in a specific location, Samsara will move forward with the remote location that works best for the business. All offers of employment are contingent upon an individual’s ability to secure and maintain the legal right to work at the company. 

Fraudulent Employment Offers

Samsara is aware of scams involving fake job interviews and offers. Please know we do not charge fees to applicants at any stage of the hiring process. Official communication about your application will only come from emails ending in ‘@samsara.com’ or ‘@us-greenhouse-mail.io’. For more information regarding fraudulent employment offers, please visit our blog post here.

Apply for this job

7d

Senior Business Intelligence Engineer

SquareSan Francisco, CA, Remote
tableauairflowsqlDesignjavamysqlpython

Square is hiring a Remote Senior Business Intelligence Engineer

Job Description

The BI Team at Cash App enables our teams to make impactful business decisions. Our BI Engineers handle everything from data architecture and modeling to data pipeline tooling and dashboarding. As a Senior BI Engineer at Cash App, you will report to the BI Manager and work with Analysts, Data Scientists, Software Engineers and Product Managers to lay the foundation for analyzing our large, unique dataset. We are an extremely data-driven team - from understanding our customers, managing and operating our business, to informing product development. You will build, curate, document, and manage key datasets and ETLs to increase the impact of the entire team.

You will:

  • Create new data models, and optimize existing ones, for the most widely used Cash App events, entities, and processes
  • Standardize business and product metric definitions in curated and optimized datasets
  • Build pipelines out of our data warehouse
  • Teach (and encourage) others to self-serve while building tools that make it simpler and faster for them to do so
  • Promote data, analytics, and data model design best practices
  • Create dashboards that help our teams understand the performance of the business and help them make decisions

Qualifications

You have:

  • Background/knowledge in Computer Science, Applied Math, Engineering, Stats, Physics, or something comparable
  • 5+ years of industry experience building complex, scalable ETLs for a variety of different business and product use cases
  • An interest in advancing Cash App's vision of building products for economic empowerment - this should be something that legitimately excites you

Technologies we use and teach (a short illustrative sketch follows this list):

  • SQL (MySQL, Snowflake, BigQuery, etc.)
  • Airflow, Looker and Tableau
  • Python and Java
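
As a rough illustration of how these pieces are commonly combined (not Cash App's actual stack or code), the sketch below uses Airflow's TaskFlow API to materialize a hypothetical curated metric table; the table and column names are invented:

from datetime import datetime

from airflow.decorators import dag, task

# Hypothetical warehouse objects, for illustration only.
DAU_SQL = """
CREATE OR REPLACE TABLE curated.daily_active_users AS
SELECT activity_date, COUNT(DISTINCT customer_id) AS dau
FROM raw.app_events
GROUP BY activity_date
"""


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def curated_metrics():
    @task
    def build_daily_active_users() -> None:
        # A real task would execute DAU_SQL against the warehouse
        # (e.g. Snowflake, BigQuery, or MySQL) via a provider hook or operator.
        print(DAU_SQL)

    build_daily_active_users()


curated_metrics()

The point of a curated model like this is that the metric definition lives in one governed place, so dashboards and ad hoc analysis agree on the same numbers.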

See more jobs at Square

Apply for this job

7d

Principal Software Engineer - Backend - Python (Xpanse)

Palo Alto NetworksReston, VA, Remote
airflowDesignjavaelasticsearchpython

Palo Alto Networks is hiring a Remote Principal Software Engineer - Backend - Python (Xpanse)

Job Description

Your Career

We’re looking for an experienced Data Engineer Developer to join the Cyber Research Engineering team at Cortex.

Cyber Research Engineering is a cross-functional team composed of people with backgrounds at the nexus of software engineering, data analysis, threat hunting, and national security. We're responsible for performing rapid prototyping and special projects using unique datasets to deliver cybersecurity insights for government, military, and commercial customers. We have several key responsibilities:

  • Operations - We collect data, perform specialized scanning, and provide advanced vulnerability testing
  • Analytics - We analyze threat intelligence, perform APT threat hunting, and conduct research projects tailored to the special needs of public sector customers
  • Software Development - We build tools for technical users, create prototypes for unique public sector use cases, and maintain/add features to our established cyber intelligence products

The work of the Cyber Research Engineering team is varied, exciting, and meaningful. We leverage endpoint and boundary devices, network traffic, internet scanning, malware sandboxes, and many other datasets ranging in size from terabytes to petabytes. We encourage engineers and researchers to use these datasets in unconventional ways to produce unparalleled cybersecurity products and insights about computer networks that allow customers to better hunt the individuals who try to abuse those networks.

Your Impact

We’re looking for a data engineer to join the Cyber Research team at Xpanse, the latest addition to Palo Alto Networks Cortex. At Xpanse, you will:

  • Help bring a new threat intelligence product to market
  • Design and implement methods to store, query, and analyze datasets ranging in size from gigabytes to petabytes in unconventional ways to produce unparalleled cybersecurity insights
  • Develop novel techniques and approaches for understanding the Internet and characterizing data for insights relevant to cyber threat intelligence
  • Empower threat intelligence analysts at Palo Alto and its customers to counter bad actors across the internet

Qualifications

Your Experience

  • 6+ years of experience as a professional software engineer or data engineer for a SaaS business
  • Extensive experience in developing data-driven applications using Python
  • Strong familiarity with databases, data modeling, profiling, and performance optimization
  • Familiarity with search and retrieval problems over large datasets, including database query optimization
  • Excited to work closely with analysts to build a toolkit for discovering and tracking APT campaigns across the internet
  • High-level understanding of computer networks, protocols, and how the Internet works

Nice to have

  • Professional experience designing and implementing solutions for large-scale data storage and analysis, using tools such as Google BigQuery, ClickHouse, Google BigTable, Apache HBase, or ElasticSearch (a hedged query sketch follows this list)
  • Experience developing and maintaining stream-processing pipelines using Apache Beam or Apache Spark
  • Familiarity with ETL management tools, such as Apache Airflow
  • Background or interest in threat intelligence and applied security
  • Familiarity with datasets associated with cyber threat hunting
  • Knowledge of Google Cloud Platform services
  • Familiarity with Java
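
Purely as a hedged illustration of the kind of large-dataset querying mentioned above (the project, dataset, and table names are invented, and this is not Xpanse's code), a parameterized BigQuery client call might look like this:

from google.cloud import bigquery  # pip install google-cloud-bigquery

# Hypothetical dataset and table; the query is illustrative only.
QUERY = """
SELECT ip, COUNT(*) AS open_port_count
FROM `my-project.scans.internet_scan_results`
WHERE scan_date = @scan_date
GROUP BY ip
ORDER BY open_port_count DESC
LIMIT 100
"""


def top_exposed_hosts(scan_date: str):
    # Uses application-default credentials from the environment.
    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("scan_date", "DATE", scan_date)
        ]
    )
    # Blocks until the query finishes and returns the result rows.
    return list(client.query(QUERY, job_config=job_config).result())

Query parameters (rather than string formatting) keep ad hoc research queries safe and cacheable when analysts rerun them over different scan dates.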

See more jobs at Palo Alto Networks

Apply for this job

8d

Middle Product Analyst at HolyWater

GenesisUkraine Remote
tableauairflowsqlB2CFirebasepythonAWS

Genesis is hiring a Remote Middle Product Analyst at HolyWater

WE SUPPORT UKRAINE

Holy Water condemns russia's war against Ukraine and supports the state. At the start of the full-scale war, we launched the sale of an NFT collection about events in Ukraine to raise 1 million dollars for the needs of the Ukrainian army, and we joined the corporate charitable fund Genesis for Ukraine. The fund's team buys essential gear, equipment, and medicine for employees and their relatives defending the country on the front line, and we also donate regularly to the Armed Forces of Ukraine.

MEET YOUR FUTURE TEAM!

You will work at Holy Water, a ContentTech startup that creates and publishes books, audiobooks, interactive stories, and video series. We build synergy between the efficiency of AI and the creativity of writers, helping them inspire tens of millions of users worldwide with their content.

HolyWater was founded in 2020 within the Genesis ecosystem. Since then the team has grown from 6 to 90 specialists, and our apps have repeatedly become leaders in their categories in the USA, Australia, Canada, and Europe.

Through our platform, we give any talented writer the chance to reach the millions-strong audience of our apps and inspire them with their stories. Our products are already used by more than 10 million users around the world.

OUR ACHIEVEMENTS IN 2023:

1. Our interactive-stories app was the global number one by downloads in its niche for 3 months.
2. Our book app, Passion, became number one in its niche in the USA and Europe in December.
3. We launched a video-series platform based on our books and produced our first successful pilot series.
4. New downloads and revenue nearly doubled compared to 2022.

HolyWater's core value is the people who work with us. That is why we make every effort to create conditions in which every employee can realize their potential to the fullest and achieve the most ambitious goals.

COMPANY CULTURE

In its work the team relies on six key values: constant growth, intrinsic motivation, grit and flexibility, mindfulness, freedom and responsibility, and focus on results.

The team is now looking for a Middle Product Analyst to become a new player on the analytics team.

YOUR RESPONSIBILITIES WILL INCLUDE:

  • Generating growth hypotheses and launching A/B tests together with the product team (a minimal analysis sketch follows this list).
  • Supporting analytical processes during A/B testing to optimize product decisions.
  • Finding growth points in the product and in marketing.
  • Collaborating with product managers, developers, and marketers to directly influence the product.
  • Automating report preparation for effective monitoring of metrics.
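
As a minimal, illustrative readout of an A/B test of the kind described above (the counts are invented and this is not HolyWater's tooling), a two-proportion z-test in Python might look like this:

from statsmodels.stats.proportion import proportions_ztest  # pip install statsmodels

# Hypothetical results: conversions and users for control vs. treatment.
conversions = [412, 468]
users = [10_000, 10_050]

# Two-sample test of whether the conversion rates differ.
z_stat, p_value = proportions_ztest(count=conversions, nobs=users)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) would suggest the variants convert differently;
# in practice you would also check sample size, runtime, and guardrail metrics.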

WHAT YOU NEED TO JOIN:

  • At least 1 year of experience as a Data Analyst / Scientist.
  • Experience with column-oriented storage (BigQuery, AWS Athena, etc.).
  • Professional-level SQL skills.
  • Experience building and visualizing data with BI tools (Tableau).
  • Experience with Amplitude, Firebase, AppsFlyer.
  • Responsibility and proactivity.
  • Project-oriented and logical thinking.

NICE TO HAVE:

  • Understanding of Python basics for analytics.
  • Experience with Google Cloud Platform.
  • Experience with B2C mobile applications.

WHAT WE OFFER:

  • You will be part of a close-knit team of professionals where you can exchange knowledge and experience and get support and advice from colleagues.
  • Flexible working hours and the ability to work remotely from any safe place in the world.
  • The option to visit the office in Kyiv's Podil district. At the office you don't have to worry about the routine: breakfasts, lunches, plenty of snacks and fruit, lounge zones, massage, and other perks await you.
  • 20 working days of paid vacation per year and an unlimited number of sick days.
  • Health insurance.
  • The option to consult a psychologist.
  • All the equipment you need for work.
  • We actively use modern tools and technologies such as BigQuery, Tableau, Airflow, Airbyte, and DBT. This will let you work with cutting-edge tools and expand your analytics skills.
  • An online library, regular lectures from top-level speakers, and compensation for conferences, trainings, and seminars.
  • A professional internal community for your career development.
  • A culture of open feedback.

SELECTION STAGES:

1. Initial screening. The recruiter asks a few questions (by phone or in a messenger) to get a sense of your experience and skills before the interview.
2. Test assignment.
It confirms your expertise and shows which approaches, tools, and solutions you use in your work. We do not limit your time, and we never use candidates' work without an appropriate agreement.
3. Interview with the manager.
A comprehensive conversation about your professional competencies and the work of the team you are applying to.
4. Bar raising.
For the final interview we invite one of the top managers of the Genesis ecosystem who will not work directly with the candidate. The bar raiser focuses on your soft skills and values to understand how quickly you can grow together with the company.


If you are ready to take on the challenge and join our team, we look forward to your resume!

See more jobs at Genesis

Apply for this job

8d

Junior Analytics Engineer (HolyWater)

GenesisKyiv, UA Remote
tableauterraformairflowsqlpython

Genesis is hiring a Remote Junior Analytics Engineer (HolyWater)

WE SUPPORT UKRAINE

Holy Water condemns russia's war against Ukraine and supports the state. At the start of the full-scale war, we launched the sale of an NFT collection about events in Ukraine to raise 1 million dollars for the needs of the Ukrainian army, and we joined the corporate charitable fund Genesis for Ukraine. The fund's team buys essential gear, equipment, and medicine for employees and their relatives defending the country on the front line, and we also donate regularly to the Armed Forces of Ukraine.

MEET YOUR FUTURE TEAM!

You will work at Holy Water, a ContentTech startup that creates and publishes books, audiobooks, interactive stories, and video series. We build synergy between the efficiency of AI and the creativity of writers, helping them inspire tens of millions of users worldwide with their content.

HolyWater was founded in 2020 within the Genesis ecosystem. Since then the team has grown from 6 to 90 specialists, and our apps have repeatedly become leaders in their categories in the USA, Australia, Canada, and Europe.

Through our platform, we give any talented writer the chance to reach the millions-strong audience of our apps and inspire them with their stories. Our products are already used by more than 10 million users around the world.

OUR ACHIEVEMENTS IN 2023:

1. Our interactive-stories app was the global number one by downloads in its niche for 3 months.
2. Our book app, Passion, became number one in its niche in the USA and Europe in December.
3. We launched a video-series platform based on our books and produced our first successful pilot series.
4. New downloads and revenue nearly doubled compared to 2022.

HolyWater's core value is the people who work with us. That is why we make every effort to create conditions in which every employee can realize their potential to the fullest and achieve the most ambitious goals.

COMPANY CULTURE

In its work the team relies on six key values: constant growth, intrinsic motivation, grit and flexibility, mindfulness, freedom and responsibility, and focus on results.

We are currently focused on scaling the team and finding people who will help take our apps to new heights. If you are a bold, hard-working, curious, self-aware person who is not afraid of making mistakes and learning from them, let's talk!

We are now looking for a Junior Analytics Engineer to join the Data Engineering team and help build the data platform and manage our data.

Since the company follows a data-informed approach to decision-making, data quality and the antifragility of the data platform are important priorities in the project's architecture and directly affect the speed of development and the quality of product decisions. In our work we use the most modern approaches and tools, a.k.a. the Modern Data Stack.

YOUR RESPONSIBILITIES WILL INCLUDE:

  • Integrating third-party APIs (AirByte, Python).
  • Conducting exploratory data analysis (EDA).
  • Modeling data in DBT (SQL, Jinja).
  • Building data pipelines in Airflow (Python); a minimal orchestration sketch follows this list.
  • Working with various services in the Google Cloud Platform ecosystem.
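
For illustration only, a minimal Airflow DAG that triggers a dbt build might look like the sketch below; the project path and selector are hypothetical, and teams often use dedicated dbt integrations (for example Cosmos or the dbt Cloud provider) instead of a plain bash call:

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_build",         # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # requires Airflow 2.4+
    catchup=False,
) as dag:
    BashOperator(
        task_id="dbt_run",
        # Hypothetical project path and model selection.
        bash_command="cd /opt/analytics/dbt_project && dbt run --select staging+",
    )

Keeping the transformation logic in dbt and the scheduling in Airflow keeps each tool doing what it is best at: SQL modeling on one side, retries and dependencies on the other.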

WHAT YOU NEED TO JOIN:

  • At least 6 months of commercial experience as a Data Analyst and a desire to grow in Data Engineering.
  • Professional skills in Python and SQL.
  • Experience with GCP or other cloud providers is a plus.
  • Responsibility and proactivity.
  • Attention to detail and the ability to make sense of unfamiliar data.

WHAT WE OFFER:

  • The opportunity to grow and constantly upgrade your skills: an online library, regular lectures from top-level speakers, and compensation for conferences, trainings, and seminars.
  • A professional internal community for your career development (Analytics and Data Engineering).
  • Room to implement your own ideas and influence the products.
  • Flexible working hours and the ability to work remotely from any safe place in the world, or to visit the comfortable office in Podil.
  • 20 working days of paid vacation per year and an unlimited number of sick days.
  • Health insurance.
  • The option to consult a psychologist.
  • All the equipment you need for work.
  • We actively use modern tools and technologies such as BigQuery, Tableau, Airflow, Airbyte, Terraform, and DBT. This will let you work with cutting-edge tools and sharpen your engineering skills.
  • A culture of open feedback.

SELECTION STAGES:

1. Initial screening. The recruiter asks a few questions (by phone or in a messenger) to get a sense of your experience and skills before the interview.
2. Test assignment.
It confirms your expertise and shows which approaches, tools, and solutions you use in your work. We do not limit your time, and we never use candidates' work without an appropriate agreement.
3. Interview with the manager.
A comprehensive conversation about your professional competencies and the work of the team you are applying to.
4. Bar raising.
For the final interview we invite one of the top managers of the Genesis ecosystem who will not work directly with the candidate. The bar raiser focuses on your soft skills and values to understand how quickly you can grow together with the company.


Want to become part of a strong team? Send us your resume!

See more jobs at Genesis

Apply for this job

9d

Sr. Site Reliability Engineer IV

Signify HealthDallas TX, Remote
terraformairflowDesignmobileazurec++kubernetespythonAWS

Signify Health is hiring a Remote Sr. Site Reliability Engineer IV

How will this role have an impact?

Join Signify Health's vibrant Site Reliability Engineering team as a Site Reliability Engineer. We're seeking passionate individuals from diverse technical backgrounds. This role reports to the Manager of Site Reliability Engineering, and we offer a collaborative environment that values each team member's unique contribution and fosters an inclusive culture.

Your Role:

  • Develop strategies to improve the stability, scalability, and availability of our products.
  • Maintain and deploy observability solutions to optimize system performance.
  • Collaborate with cross-functional teams to enhance operational processes and service management.
  • Design, build, and maintain application stacks for product teams.
  • Create sustainable systems and services through automation.

Skills We're Seeking:

  • An eagerness to collaborate with and mentor others in the field of Site Reliability Engineering.
  • Strong familiarity with cloud environments (Azure, AWS, or GCP) and a desire to develop further expertise.
  • Advanced understanding of scripting languages (preferably Bash or Python) and programming languages (preferably Golang).
  • Advanced grasp of infrastructure as code, preferably with experience with Terraform.
  • Advanced understanding of Kubernetes and containerization technologies.
  • Advanced understanding of CI/CD principles and willingness to guide and enforce best practices.
  • Advanced understanding of Site Reliability and observability principles, preferably with experience with New Relic.
  • A proactive approach to identifying problems, performance bottlenecks, and areas for improvement.

The base salary hiring range for this position is $108,900 to $189,700. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for incentive compensation, equity, and benefits.
In addition to your compensation, enjoy the rewards of an organization that puts our heart into caring for our colleagues and our communities. Eligible employees may enroll in a full range of medical, dental, and vision benefits, a 401(k) retirement savings plan, and an Employee Stock Purchase Plan. We also offer education assistance, free development courses, paid time off programs, paid holidays, a CVS store discount, and discount programs with participating partners.

About Us:

Signify Health is helping build the healthcare system we all want to experience by transforming the home into the healthcare hub. We coordinate care holistically across individuals' clinical, social, and behavioral needs so they can enjoy more healthy days at home. By building strong connections to primary care providers and community resources, we're able to close critical care and social gaps, as well as manage risk for individuals who need help the most. This leads to better outcomes and a better experience for everyone involved.

Our high-performance networks are powered by more than 9,000 mobile doctors and nurses covering every county in the U.S., 3,500 healthcare providers and facilities in value-based arrangements, and hundreds of community-based organizations. Signify's intelligent technology and decision-support services enable these resources to radically simplify care coordination for more than 1.5 million individuals each year while helping payers and providers more effectively implement value-based care programs.

To learn more about how we're driving outcomes and making healthcare work better, please visit us at www.signifyhealth.com

Diversity and Inclusion are core values at Signify Health, and fostering a workplace culture reflective of that is critical to our continued success as an organization.

We are committed to equal employment opportunities for employees and job applicants in compliance with applicable law and to an environment where employees are valued for their differences.

See more jobs at Signify Health

Apply for this job