airflow Remote Jobs

100 Results

22d

Principal Data Engineer

Gemini | Remote (USA)
remote-first, nosql, airflow, sql, Design, css, python, javascript

Gemini is hiring a Remote Principal Data Engineer

About the Company

Gemini is a global crypto and Web3 platform founded by Tyler Winklevoss and Cameron Winklevoss in 2014. Gemini offers a wide range of crypto products and services for individuals and institutions in over 70 countries.

Crypto is about giving you greater choice, independence, and opportunity. We are here to help you on your journey. We build crypto products that are simple, elegant, and secure. Whether you are an individual or an institution, we help you buy, sell, and store your bitcoin and cryptocurrency. 

At Gemini, our mission is to unlock the next era of financial, creative, and personal freedom.

In the United States, we have a flexible hybrid work policy for employees who live within 30 miles of our office headquartered in New York City and our office in Seattle. Employees within the New York and Seattle metropolitan areas are expected to work from the designated office twice a week, unless there is a job-specific requirement to be in the office every workday. Employees outside of these areas are considered part of our remote-first workforce. We believe our hybrid approach for those near our NYC and Seattle offices increases productivity through more in-person collaboration where possible.

The Department: Analytics

The Role: Principal Data Engineer

As a member of our data engineering team, you'll set standards for data engineering solutions that have organization-wide impact. You'll provide architectural solutions that are efficient, robust, extensible, and competitive within the business and industry context. You'll collaborate with senior data engineers and analysts, guiding them toward their career goals at Gemini. Communicating your insights to leaders across the organization is paramount to success.

Responsibilities:

  • Focused on technical leadership, defining patterns and operational guidelines for their vertical(s)
  • Independently scopes, designs, and delivers solutions for large, complex challenges
  • Provides oversight, coaching and guidance through code and design reviews
  • Designs for scale and reliability with the future in mind; can carry out critical R&D
  • Successfully plans and delivers complex, long-term, multi-team or multi-system projects, including ones with external dependencies
  • Identifies problems that need to be solved and advocates for their prioritization
  • Owns one or more large, mission-critical systems at Gemini or multiple complex, team level projects, overseeing all aspects from design through implementation through operation
  • Collaborates with coworkers across the org to document and design how systems work and interact
  • Leads and coordinates large initiatives across domains, even outside their core expertise
  • Designs, architects and implements best-in-class Data Warehousing and reporting solutions
  • Builds real-time data and reporting solutions
  • Develops new systems and tools to enable the teams to consume and understand data more intuitively

Minimum Qualifications:

  • 10+ years experience in data engineering with data warehouse technologies
  • 10+ years experience in custom ETL design, implementation and maintenance
  • 10+ years experience with schema design and dimensional data modeling
  • Experience building real-time data solutions and processes
  • Advanced skills with Python and SQL are a must
  • Experience and expertise in Databricks, Spark, Hadoop etc.
  • Experience with one or more MPP databases (Redshift, BigQuery, Snowflake, etc.)
  • Experience with one or more ETL tools (Informatica, Pentaho, SSIS, Alooma, etc.)
  • Strong computer science fundamentals including data structures and algorithms
  • Strong software engineering skills in any server-side language, preferably Python
  • Experienced in working collaboratively across different teams and departments
  • Strong technical and business communication skills

Preferred Qualifications:

  • Kafka, HDFS, Hive, Cloud computing, machine learning, LLMs, NLP & Web development experience is a plus
  • NoSQL experience a plus
  • Deep knowledge of Apache Airflow
  • Expert experience implementing complex, enterprise-wide data transformation and processing solutions
  • Experience with Continuous integration and deployment
  • Knowledge and experience of financial markets, banking or exchanges
  • Web development skills with HTML, CSS, or JavaScript
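
Apache Airflow appears above as a preferred skill because pipelines like the ETL work this role describes are expressed as directed acyclic graphs (DAGs) of tasks executed in dependency order. A stdlib-only sketch of that scheduling idea, with hypothetical task names rather than any actual Gemini pipeline:

```python
from graphlib import TopologicalSorter

# Hypothetical ETL tasks mapped to their upstream dependencies,
# mirroring how an Airflow DAG wires extract -> transform -> load.
dag = {
    "extract_trades": set(),
    "extract_prices": set(),
    "transform_join": {"extract_trades", "extract_prices"},
    "load_warehouse": {"transform_join"},
}

def run_order(graph):
    """Return one valid execution order for the task graph."""
    return list(TopologicalSorter(graph).static_order())

order = run_order(dag)
print(order)  # both extracts first, then transform_join, then load_warehouse
```

In a real Airflow deployment the same shape is declared with DAG and operator objects, and the scheduler, not your code, decides when each task runs.
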
It Pays to Work Here
 
The compensation & benefits package for this role includes:
  • Competitive starting salary
  • A discretionary annual bonus
  • Long-term incentive in the form of a new hire equity grant
  • Comprehensive health plans
  • 401K with company matching
  • Paid Parental Leave
  • Flexible time off

Salary Range: The base salary range for this role is between $172,000 - $215,000 in the State of New York, the State of California and the State of Washington. This range is not inclusive of our discretionary bonus or equity package. When determining a candidate’s compensation, we consider a number of factors including skillset, experience, job scope, and current market data.

At Gemini, we strive to build diverse teams that reflect the people we want to empower through our products, and we are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Equal Opportunity is the Law, and Gemini is proud to be an equal opportunity workplace. If you have a specific need that requires accommodation, please let a member of the People Team know.

#LI-AH1

Apply for this job

22d

Java Solution Architect (Inside IR35 Contract)

Version1 | London, United Kingdom, Remote
airflow, oracle, Design, api, java, python, AWS

Version1 is hiring a Remote Java Solution Architect (Inside IR35 Contract)

Job Description

Java Solution Architect

MUST BE BASED WITHIN 50 MILES OF EDINBURGH, LONDON, BIRMINGHAM, MANCHESTER, NEWCASTLE, DUBLIN, OR BELFAST

REMOTE BASED WITH VERY OCCASIONAL TRAVEL TO CLIENT SITES AND OFFICE.

Would you like the opportunity to expand your skillset across Java, Python, Spark, Hadoop, Trino & Airflow in the Banking & Financial Services industries?

How about if you worked with an Innovation Partner of the Year Winner (2023 Oracle EMEA Partner Awards), Global Microsoft Modernising Applications Partner of the Year (2023) and AWS Collaboration Partner of the Year (2023) who would give you the opportunity to undertake accreditations and educational assistance for courses relevant to your role?

Here at Version 1, we are currently in the market for an experienced Java Solution Architect to join our growing Digital, Data & Cloud Practice.

You will have the opportunity to work with the latest technology and work on projects across a multiplicity of sectors and industries.

You will be:

  • Leading Java and Python development projects.
  • Designing and developing API integrations using Spark.
  • Collaborating with clients and internal teams to understand business requirements and translate them into HLD and LLD solutions.
  • Defining the architecture and technical design.
  • Designing data flows and integrations using Hadoop.
  • Working with the product team and testers to implement throughout testing.
  • Creating and developing comprehensive documentation, including solution architecture, design, and user guides.
  • Providing training and support to end-users and client teams.
  • Staying up to date with the latest trends and best practices, and sharing knowledge with the team.

Qualifications

You will have expertise within the following:

  • Java, Python, Spark, Hadoop (Essential)
  • Trino, Airflow (Desirable)
  • Architecture and capabilities.
  • Designing and implementing complex solutions with a focus on scalability and security.
  • Excellent communication and collaboration skills.

Apply for this job

26d

Senior Fullstack Engineer

carsales | Sydney, Australia, Remote
terraform, airflow, vue, typescript, angular, backend, frontend, Node.js

carsales is hiring a Remote Senior Fullstack Engineer

Job Description

What you will do

We're hiring a Senior Full-stack Engineer (frontend focus) who will join a team of talented developers, continuing to work with our wider Tech and Product teams and clients. 

This is an exciting role, which will include:

  • You'll work in a cross-functional full-stack team which prioritises software craftsmanship.
  • You'll have opportunities to work on all aspects of the product, including frontend (Vue.js), backend (Nest.js), CI/CD (CircleCI), cloud (Terraform + GCP), and data engineering (Airflow, BigQuery, Apache Beam)
  • Your work will have a real impact on the business and our clients, including Weatherzone, the NRL, and OzBargain.

Qualifications

What you bring to the role

  • A good understanding of frontend engineering
  • An interest in User Experience, and a willingness to understand what the end-user is trying to achieve
  • Proficient with TypeScript
  • Knowledge of Node.js REST APIs sufficient to read and understand the code and make basic changes
  • Experience with modern frontend frameworks (Vue, React, Angular, etc.) is essential
  • Professional experience with Vue will be a significant advantage

See more jobs at carsales

Apply for this job

27d

Data Engineering Intern - Graduate

Tubi | San Francisco, CA; Remote
terraform, scala, airflow, sql, java, c++, python

Tubi is hiring a Remote Data Engineering Intern - Graduate

Join Tubi (www.tubi.tv), Fox Corporation's premium ad-supported video-on-demand (AVOD) streaming service leading the charge in making entertainment accessible to all. With over 200,000 movies and television shows, including a growing library of Tubi Originals, 200+ local and live news and sports channels, and 455 entertainment partners featuring content from every major Hollywood studio, Tubi gives entertainment fans an easy way to discover new content that is available completely free. Tubi's library has something for every member of our diverse audience, and we're committed to building a workforce that reflects that diversity. We're looking for great people who are creative thinkers, self-motivators, and impact-makers looking to help shape the future of streaming.

About the Role:

At Tubi, data plays a vital role in keeping viewers engaged and the business thriving. Every day, data engineering pipelines analyze the massive amount of data generated by millions of viewers, turning it into actionable insights. In addition to processing TBs a day of 1st party user activity data, we manage a petabyte scale data lake and data warehouses that several hundred consumers use daily. We have two openings on two different teams.

Core Data Engineering (1): In this role, you will join a team focused on Core Data Engineering, helping build and analyze business-critical datasets that fuel Tubi's success as a leading streaming platform.

  • Use SQL and SQL modeling to interact with and create massive sets of data
  • Use DBT and its semantic modeling concept to build production data models
  • Use Databricks as a data warehouse and computing platform
  • Use Python/Scala in notebooks to interact with and create large datasets

Streaming Analytics (1): In this role, you will join a small and nimble team focused on Streaming Analytics that powers our core and critical datasets for machine learning, helping improve the data quality that fuels Tubi's success as a leading streaming platform.

  • Use SQL to explore and analyze the data quality of our most critical datasets, working with different technical stakeholders across ML & data science 
  • Work with engineers to implement a near-real-time data quality dashboard
  • Use Python/Scala in notebooks to transform and explore large datasets
  • Use tools like Airflow for workflow management and Terraform for cloud infrastructure automation
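
As a rough illustration of the data-quality exploration the Streaming Analytics bullets describe, a check like "what fraction of rows is missing a key field" is often the first metric on such a dashboard. A stdlib-only sketch with invented field names, not actual Tubi data:

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

# Hypothetical sample of viewer events; the field names are illustrative only.
events = [
    {"user_id": 1, "watch_seconds": 120},
    {"user_id": 2, "watch_seconds": None},
    {"user_id": None, "watch_seconds": 30},
    {"user_id": 4, "watch_seconds": 45},
]

print(null_rate(events, "user_id"))        # 0.25
print(null_rate(events, "watch_seconds"))  # 0.25
```

At petabyte scale the same computation would more likely run as SQL on the warehouse or in Spark, but the metric being tracked is the same.
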

Qualifications: 

  • Fluency (intermediate) in one major programming language (preferably Python, Scala, or Java) and SQL (any variant)
  • Familiar with big data technologies (e.g., Apache Spark, Kafka) is a plus
  • Strong communication skills and a desire to learn!

Program Eligibility Requirements:

  • Must be actively enrolled in an accredited college or university and pursuing an undergraduate or graduate degree during the length of the program
  • Current class standing of sophomore (second-year college student) or above
  • Strong academic record (minimum cumulative 3.0 GPA)
  • Committed and available to work for the entire length of the program

About the Program:

  • Application Deadline: April 19, 2024 
  • Program Timeline: 10-week placement beginning on 6/17
  • Weekly Hours: Up to 40 hours per week (5 days)
  • Worksite: Remote or Hybrid (SF or LA)

Pursuant to state and local pay disclosure requirements, the pay range for this role, with the final offer amount dependent on education, skills, experience, and location, is listed per hour below.

California, Colorado, New York City, Westchester County, NY, and Washington
$40 USD

Tubi is a division of Fox Corporation, and the FOX Employee Benefits summarized here cover the majority of all US employee benefits. The following distinctions outline the differences between the Tubi and FOX benefits:

  • For US-based non-exempt Tubi employees, the FOX Employee Benefits summary accurately captures the Vacation and Sick Time.
  • For all salaried/exempt employees, in lieu of the FOX Vacation policy, Tubi offers a Flexible Time off Policy to manage all personal matters.
  • For all full-time, regular employees, in lieu of FOX Paid Parental Leave, Tubi offers a generous Parental Leave Program, which allows parents twelve (12) weeks of paid bonding leave within the first year of the birth, adoption, surrogacy, or foster placement of a child. This time is 100% paid through a combination of any applicable state, city, and federal leaves and wage-replacement programs in addition to contributions made by Tubi.
  • For all full-time, regular employees, Tubi offers a monthly wellness reimbursement.

Tubi is proud to be an equal opportunity employer and considers qualified applicants without regard to race, color, religion, sex, national origin, ancestry, age, genetic information, sexual orientation, gender identity, marital or family status, veteran status, medical condition, or disability. Pursuant to the San Francisco Fair Chance Ordinance, we will consider employment for qualified applicants with arrest and conviction records. We are an E-Verify company.

See more jobs at Tubi

Apply for this job

28d

Principal Data Engineer

Procore Technologies | Bangalore, India, Remote
scala, nosql, airflow, Design, azure, UX, java, docker, postgresql, kubernetes, jenkins, python, AWS

Procore Technologies is hiring a Remote Principal Data Engineer

Job Description

We’re looking for a Principal Data Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next-generation construction data platform for others to build upon, including Procore developers, analysts, partners, and customers.

As a Principal Data Engineer, you’ll use your expert-level technical skills to craft innovative solutions while influencing and mentoring other senior technical leaders. To be successful in this role, you’re passionate about distributed systems, including caching, streaming, and indexing technologies on the cloud, with a strong bias for action and outcomes. If you’re an inspirational leader comfortable translating vague problems into pragmatic solutions that open up the boundaries of technical possibilities—we’d love to hear from you!

This position reports to the Senior Manager, Reporting and Analytics. It can be based in our Bangalore or Pune office, or you can work remotely from a location in India. We’re looking for someone to join us immediately.

What you’ll do: 

  • Design and build the next-generation data platform for the construction industry
  • Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
  • Contribute to setting standards and development principles across multiple teams and the larger organization
  • Stay connected with other architectural initiatives and craft a data platform architecture that supports and drives our overall platform
  • Provide technical leadership to efforts around building a robust and scalable data pipeline to support billions of events
  • Help identify and propose solutions for technical and organizational gaps in our data pipeline by running proof of concepts and experiments working with Data Platform Engineers on implementation
  • Work alongside our Product, UX, and IT teams, leveraging your experience and expertise in the data space to influence our product roadmap, developing innovative solutions that add additional capabilities to our tools

What we’re looking for: 

  • Bachelor’s degree in Computer Science, a similar technical field of study, or equivalent practical experience is required; MS or Ph.D. degree in Computer Science or a related field is preferred
  • 10+ years of experience building and operating cloud-based, highly available, and scalable online serving or streaming systems utilizing large, diverse data sets in production
  • Expertise with diverse data technologies like Databricks, PostgreSQL, GraphDB, NoSQL DB, Mongo, Cassandra, Elastic Search, Snowflake, etc.
  • Strength in the majority of commonly used data technologies and languages such as Python, Java or Scala, Kafka, Spark, Airflow, Kubernetes, Docker, Argo, Jenkins, or similar
  • Expertise with all aspects of data systems, including ETL, aggregation strategy, performance optimization, and technology trade-off
  • Understanding of data access patterns, streaming technology, data validation, data modeling, data performance, cost optimization
  • Experience defining data engineering/architecture best practices at a department and organizational level and establishing standards for operational excellence and code and data quality at a multi-project level
  • Strong passion for learning, always open to new technologies and ideas
  • AWS and Azure experience is preferred

Qualifications

See more jobs at Procore Technologies

Apply for this job

28d

Staff Data Engineer

Procore Technologies | Bangalore, India, Remote
scala, airflow, sql, Design, UX, java, kubernetes, python

Procore Technologies is hiring a Remote Staff Data Engineer

Job Description

We’re looking for a Staff Data Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next-generation construction data platform for others to build upon, including Procore developers, analysts, partners, and customers.

As a Staff Data Engineer, you’ll partner with other engineers and product managers across Product & Technology to develop data platform capabilities that enable the movement, transformation, and retrieval of data for use in analytics, machine learning, and service integration. To be successful in this role, you’re passionate about distributed systems including storage, streaming, and batch data processing technologies on the cloud, with a strong bias for action and outcomes. If you’re a seasoned data engineer comfortable and excited about building our next-generation data platform and translating problems into pragmatic solutions that open up the boundaries of technical possibilities—we’d love to hear from you!

This is a full-time position reporting to our Senior Manager of Software Engineering. It will be based in the India office, but employees can choose to work remotely. We are looking for someone to join our team immediately.

What you’ll do: 

  • Participate in the design and implementation of our next-generation data platform for the construction industry
  • Define and implement operational and dimensional data models and transformation pipelines to support reporting and analytics
  • Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
  • Understand our current data models and infrastructure, proactively identify areas for improvement, and prescribe architectural recommendations with a focus on performance and accessibility. 
  • Work alongside our Product, UX, and IT teams, leveraging your expertise in the data space to influence our product roadmap, developing innovative solutions that add additional value to our platform
  • Help uplevel teammates by conducting code reviews, providing mentorship, pairing, and training opportunities
  • Stay up to date with the latest data technology trends

What we’re looking for: 

  • Bachelor’s Degree in Computer Science or a related field is preferred, or comparable work experience 
  • 8+ years of experience building and operating cloud-based, highly available, and scalable data platforms and pipelines supporting vast amounts of data for reporting and analytics
  • 2+ years of experience building data warehouses in Snowflake or Redshift
  • Hands-on experience with MPP query engines like Snowflake, Presto, Dremio, and Spark SQL
  • Expertise in relational and dimensional data modeling
  • Understanding of data access patterns, streaming technology, data validation, performance optimization, and cost optimization
  • Strength in commonly used data technologies and languages such as Python, Java or Scala, Kafka, Spark, Flink, Airflow, Kubernetes, or similar
  • Strong passion for learning, always open to new technologies and ideas
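
The dimensional-modeling expertise asked for above centers on star schemas: fact tables (events, measures) joined to dimension tables (descriptive attributes) and aggregated. A toy sketch using Python's built-in sqlite3, with invented table and column names rather than Procore's actual schema:

```python
import sqlite3

# Toy star schema: one fact table keyed to one dimension table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_project (project_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_events (project_id INTEGER, cost REAL);
""")
con.executemany("INSERT INTO dim_project VALUES (?, ?)",
                [(1, "APAC"), (2, "EMEA")])
con.executemany("INSERT INTO fact_events VALUES (?, ?)",
                [(1, 100.0), (1, 50.0), (2, 75.0)])

# The canonical dimensional query: aggregate facts by a dimension attribute.
rows = con.execute("""
    SELECT d.region, SUM(f.cost)
    FROM fact_events f JOIN dim_project d USING (project_id)
    GROUP BY d.region ORDER BY d.region
""").fetchall()
print(rows)  # [('APAC', 150.0), ('EMEA', 75.0)]
```

The same pattern scales up to Snowflake, Redshift, or Spark SQL; what changes is the engine, not the modeling discipline.
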

Qualifications

See more jobs at Procore Technologies

Apply for this job

28d

Software Engineer - Infrastructure Platforms

Cloudflare | Austin or Remote US
airflow, postgres, sql, Design, ansible, docker, postgresql, mysql, kubernetes, linux, python

Cloudflare is hiring a Remote Software Engineer - Infrastructure Platforms

About Us

At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

Available Locations: Remote - US, Mexico City - Mexico, Ontario - Canada.

About the Role

An engineering role at Cloudflare provides an opportunity to address some big challenges, at scale.  We believe that with our talented team, we can solve some of the biggest security, reliability and performance problems facing the Internet. Just how big?  

  • We have in excess of 15 Terabits of network transit capacity
  • We operate 250 Points-of-presence around the world
  • We serve more traffic than Twitter, Amazon, Apple, Instagram, Bing, & Wikipedia combined
  • Anytime we push code, it immediately affects over 200 million internet users
  • Every day, up to 20,000 new customers sign-up for Cloudflare service
  • Every week, the average Internet user touches us more than 500 times

We are looking for talented Software Engineers to build and develop the platform that earns the trust Cloudflare customers place in us. Our Software Engineers come from a variety of technical backgrounds and have built up their knowledge working in different environments, but the common factors across all of our reliability-focused engineers include a passion for automation, scalability, and operational excellence. Our Infrastructure Engineering team focuses on the automation needed to scale our infrastructure.

Our team is well-funded and focused on building an extraordinary company.  This is a superb opportunity to join a high-performing team and scale our high-growth network as Cloudflare’s business grows.  You will build tools to constantly improve our scale and speed of deployment.  You will nurture a passion for an “automate everything” approach that makes systems failure-resistant and ready-to-scale.   

Infrastructure Platforms Software Engineers inside our Resiliency organization focus on building and maintaining the reliable and scalable underlying platforms that act as sources of truth and foundations for automation of Cloudflare’s hardware, network, and datacenter infrastructure. We interface with SRE, Network Engineering, Datacenter Engineering and other Infrastructure and Reliability teams to ensure their ongoing needs are met by the platforms we provide.

Many of our Software Engineers have had the opportunity to work at multiple offices on interim and long-term project assignments. The ideal Software Engineering candidate has a passionate curiosity about how the Internet fundamentally works and strong knowledge of Linux and hardware. We require strong coding ability in Rust and Python. We prefer to hire experienced candidates; however, raw skill trumps experience, and we welcome strong junior applicants.

 

Required Skills

  • Intermediate level software development skills in Rust and Python
  • Linux systems administration experience
  • 5 years of relevant software development experience
  • Strong skills in network services and Rest APIs
  • SQL databases (Postgres or MySQL)
  • Self-starter; able to work independently based on high-level requirements

 

Examples of desirable skills, knowledge and experience

  • 5 years of relevant work experience
  • Prior experience working with Diesel and common database patterns in Rust
  • Configuration management systems such as Saltstack, Chef, Puppet or Ansible
  • Prior experience working with datacenter infrastructure automation at scale
  • Load balancing and reverse proxies such as Nginx, Varnish, HAProxy, Apache
  • The ability to understand service metrics and visualize them using Grafana and Prometheus
  • Key/Value stores (Redis, KeyDB, CouchBase, KyotoTycoon, Cassandra, LevelDB)

Bonus Points

  • Experience with programming languages other than those listed in requirements.
  • Network fundamentals DHCP, subnetting, routing, firewalls, IPv6
  • Experience with continuous integration and deployment pipelines
  • Performance analysis and debugging with tools like perf, sar, strace, gdb, and dtrace
  • Experience developing systems that are highly available and redundant across regions
  • Experience with the Linux kernel and Linux software packaging
  • Internetworking and BGP

Some tools that we use

  • Rust
  • Python
  • Diesel
  • Actix
  • Tokio
  • Apache Airflow 
  • Salt
  • Netbox
  • Docker
  • Kubernetes
  • Nginx
  • PostgreSQL
  • Redis
  • Prometheus

Compensation

Compensation may be adjusted depending on work location.

  • For Colorado-based hires: Estimated annual salary of $137,000 - $152,000
  • For New York City, Washington, and California (excluding Bay Area) based hires: Estimated annual salary of $154,000 - $171,000
  • For Bay Area-based hires: Estimated annual salary of $162,000 - $180,000

Equity

This role is eligible to participate in Cloudflare’s equity plan.

Benefits

Cloudflare offers a complete package of benefits and programs to support you and your family.  Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!  The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.

Health & Welfare Benefits

  • Medical/Rx Insurance
  • Dental Insurance
  • Vision Insurance
  • Flexible Spending Accounts
  • Commuter Spending Accounts
  • Fertility & Family Forming Benefits
  • On-demand mental health support and Employee Assistance Program
  • Global Travel Medical Insurance

Financial Benefits

  • Short and Long Term Disability Insurance
  • Life & Accident Insurance
  • 401(k) Retirement Savings Plan
  • Employee Stock Participation Plan

Time Off

  • Flexible paid time off covering vacation and sick leave
  • Leave programs, including parental, pregnancy health, medical, and bereavement leave

 

What Makes Cloudflare Special?

We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.

Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

Path Forward Partnership: Since 2016, we have partnered with Path Forward, a nonprofit organization, to create 16-week positions for mid-career professionals who want to get back to the workplace after taking time off to care for a child, parent, or loved one.

1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here’s the deal - we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

Sound like something you’d like to be a part of? We’d love to hear from you!

This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St., San Francisco, CA 94107.

See more jobs at Cloudflare

Apply for this job

29d

Senior Data Scientist - Support

Square, San Francisco, CA, Remote
Bachelor degree, tableau, airflow, sql, Design, python

Square is hiring a Remote Senior Data Scientist - Support

Job Description

The Cash App Support organization is growing and we are looking for a Data Scientist (DS) to join the team. The DS team at Cash derives valuable insights from our unique datasets and turns those insights into actions that improve the experience for our customers every day. In this role, you’ll be embedded in our Support org and work closely with operations and other cross-functional partners to drive meaningful change in how our customers interact with the Support team and resolve issues with their accounts.

You will:

  • Partner directly with a Cash App customer support team, working closely with operations, engineers, and machine learning teams
  • Analyze large datasets using SQL and scripting languages to surface actionable insights and opportunities to the operations team and other key stakeholders
  • Approach problems from first principles, using a variety of statistical and mathematical modeling techniques to research and understand advocate and customer behavior
  • Design and analyze A/B experiments to evaluate the impact of changes we make to our operational processes and tools
  • Work with engineers to log new, useful data sources as we evolve processes, tooling, and features
  • Build, forecast, and report on metrics that drive strategy and facilitate decision making for key business initiatives
  • Write code to effectively process, cleanse, and combine data sources in unique and useful ways, often resulting in curated ETL datasets that are easily used by the broader team
  • Build and share data visualizations and self-serve dashboards for your partners
  • Effectively communicate your work with team leads and cross-functional stakeholders on a regular basis
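The A/B-experiment work described above often comes down to comparing a conversion rate between a control and a treatment group. As a minimal sketch (the sample sizes and conversion counts below are hypothetical, not from the posting), a two-sided two-proportion z-test can be done with the standard library alone:

```python
from math import erfc, sqrt

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                  # two-sided p-value
    return z, p_value

# Hypothetical experiment: 4.0% vs 4.6% resolution rate on 10k users per arm
z, p = two_proportion_ztest(400, 10_000, 460, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The standard error uses the pooled rate because, under the null hypothesis, both arms share one conversion rate; in practice a library such as statsmodels would also handle confidence intervals and power calculations.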

Qualifications

You have:

  • An appreciation for the connection between your work and the experience it delivers to customers. Previous exposure to or interest in customer support problems would be great to have
  • A bachelor's degree in statistics, data science, or similar STEM field with 5+ years of experience in a relevant role OR
  • A graduate degree in statistics, data science, or similar STEM field with 2+ years of experience in a relevant role
  • Advanced proficiency with SQL and data visualization tools (e.g. Looker, Tableau, etc)
  • Experience with scripting and data analysis programming languages, such as Python or R
  • Experience with cohort and funnel analyses, and a deep understanding of statistical concepts such as selection bias, probability distributions, and conditional probabilities
  • Experience in a high-growth tech environment

Technologies we use and teach:

  • SQL, Snowflake, etc.
  • Python (Pandas, Numpy)
  • Looker, Mode, Tableau, Prefect, Airflow

See more jobs at Square

Apply for this job

29d

Staff Data Scientist - Sales & Account Management

Square, San Francisco, CA, Remote
Bachelor degree, tableau, airflow, sql, Design, python

Square is hiring a Remote Staff Data Scientist - Sales & Account Management

Job Description

The Cash App Data Science (DS) organization is growing and we are looking for a Data Scientist to join the team, embedded within our Sales and Account Management domain. You will be responsible for deriving valuable insights from our unique datasets as well as developing models, forecasts, analyses, and reports to help achieve merchant acquisition, retention, growth, and profitability goals.

You will:

  • Partner directly with the Cash App Sales & AM team, working closely with operations, strategy, engineers, account executives/managers and leads
  • Analyze large datasets using SQL and scripting languages to surface actionable insights and opportunities to key stakeholders
  • Approach problems from first principles, using a variety of statistical and mathematical modeling techniques to research and understand merchant behavior
  • Design and analyze A/B experiments to evaluate the impact of changes we make to our operational processes and tools
  • Work with engineers to log new, useful data sources as we evolve processes, tooling, and features
  • Build, forecast, and report on metrics that drive strategy and facilitate decision making for key business initiatives
  • Write code to effectively process, cleanse, and combine data sources in unique and useful ways, often resulting in curated ETL datasets that are easily used by the broader team
  • Build and share data visualizations and self-serve dashboards for your partners
  • Effectively communicate your work with team leads and cross-functional stakeholders on a regular basis

Qualifications

You have:

  • An appreciation for the connection between your work and the experience it delivers to customers. Previous exposure to or interest in marketplace platforms, especially on the merchant side, would be great to have.
  • A bachelor's degree in statistics, data science, or similar STEM field with 8+ years of experience in a relevant role OR
  • A graduate degree in statistics, data science, or similar STEM field with 6+ years of experience in a relevant role
  • Advanced proficiency with SQL and data visualization tools (e.g. Looker, Tableau, etc)
  • Experience with scripting and data analysis programming languages, such as Python or R
  • Experience with cohort and funnel analyses, and a deep understanding of statistical concepts such as selection bias, probability distributions, and conditional probabilities

Technologies we use and teach:

  • SQL, Snowflake, etc.
  • Python (Pandas, Numpy)
  • Looker, Mode, Tableau, Prefect, Airflow

See more jobs at Square

Apply for this job

29d

Java Solution Architect

Version1, Málaga, Spain, Remote
airflow, oracle, Design, api, java, python, AWS

Version1 is hiring a Remote Java Solution Architect

Job Description

Java Solution Architect

MUST BE BASED WITHIN 50 MILES OF EDINBURGH, LONDON, BIRMINGHAM, MANCHESTER, NEWCASTLE, DUBLIN, OR BELFAST

REMOTE BASED WITH VERY OCCASIONAL TRAVEL TO CLIENT SITES AND OFFICE.

Would you like the opportunity to expand your skillset across Java, Python, Spark, Hadoop, Trino, and Airflow within the Banking & Financial Services industries?

How about if you worked with an Innovation Partner of the Year Winner (2023 Oracle EMEA Partner Awards), Global Microsoft Modernising Applications Partner of the Year (2023) and AWS Collaboration Partner of the Year (2023) who would give you the opportunity to undertake accreditations and educational assistance for courses relevant to your role?

Here at Version 1, we are currently in the market for an experienced Java Solution Architect to join our growing Digital, Data & Cloud Practice.

You will have the opportunity to work with the latest technology and on projects across a multiplicity of sectors and industries.

You will be:

  • Leading the development of Java and Python development projects.
  • Designing and developing API integrations using Spark.
  • Collaborating with clients and internal teams to understand business requirements and translate them into HLD and LLD solutions.
  • Defining the architecture and technical design.
  • Designing data flows and integrations using Hadoop.
  • Working with the product team and testers to implement thorough testing.
  • Creating and developing comprehensive documentation, including solution architecture, design, and user guides.
  • Providing training and support to end-users and client teams.
  • Staying up to date with the latest trends and best practices, and share knowledge with the team.

Qualifications

You will have expertise within the following:

  • Java, Python, Spark, Hadoop (Essential)
  • Trino, Airflow (Desirable)
  • Architecture and capabilities.
  • Designing and implementing complex solutions with a focus on scalability and security.
  • Excellent communication and collaboration skills.

Apply for this job

30d

(Senior) Python Engineer, Data Group

Wolt, Stockholm, Sweden, Remote
airflow, kubernetes, python

Wolt is hiring a Remote (Senior) Python Engineer, Data Group

Job Description

Data at Wolt

As the scale of Wolt has rapidly grown, we are introducing new users to our data platform every day and want this to become a coherent and streamlined experience for all users, whether they’re Analysts, Data Scientists working with our data or teams bringing new data to the platform from their applications. We aim to both provide new platform capabilities across batch, streaming, orchestration and data integration to serve our users’ needs, as well as building an intuitive interface for them to solve their use cases without having to learn the details of the underlying tools.

In the context of this role we are hiring an experienced Senior Software Engineer to provide technical leadership and individual contribution in one of the following workstreams:

Data Governance

Wolt’s Data Group has already developed initial foundational tooling in the areas of data management, security, auditing, data catalog and quality monitoring, but through your technical contributions you will ensure our Data Governance tooling is state of the art. You’ll be improving the current Data Governance platform, making sure it can be further integrated with the rest of the Data Platform and Wolt Services in a scalable, secure, compliant way, without significant disruptions to the teams. 

Data Experience

We want to ensure our Analysts, Data Scientists, and Engineers can discover, understand, and publish high-quality data at scale. We have recently released a new data platform tool which enables simple, yet powerful creation of workflows via a declarative interface. You will help us ensure our users succeed in their work with effective and polished user experiences by developing our internal user-facing tooling and curating our documentation to the highest standards. And what's best, you get to work closely with excited users to get continuous feedback about released features while supporting and onboarding them to new workflows.

Data Lakehouse

We recently started this workstream to manage data integration, organization, and maintenance of our new Iceberg based data lakehouse architecture. Together, we build and maintain ingestion pipelines to efficiently gather data from diverse sources, ensuring seamless data flow. We create and manage workflows to transform raw data into structured formats, guaranteeing data quality and accessibility for analytics and machine learning purposes.

At the time you’ll join we’ll match you with one of these work streams based on our needs and your skills, experience and preferences.

How we work

Our teams have a lot of autonomy and ownership in how they work and solve their challenges. We value collaboration, learning from each other and helping each other out to achieve the team’s goals. We create an environment of trust, in which everyone’s ideas are heard and where we challenge each other to find the best solutions. We have empathy towards our users and other teams. Even though we’re working in a mostly remote environment these days, we stay connected and don’t forget to have fun together building great software!

Our tech stack

Our primary programming language of choice is Python. We deploy our systems in Kubernetes and AWS. We use Datadog for observability (logging and metrics). We have built our data warehouse on top of Snowflake and orchestrate our batch processes with Airflow and Dagster. We are heavy users of Kafka and Kafka Connect. Our CI/CD pipelines rely on GitHub actions and Argo Workflows.
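Orchestrators such as the Airflow and Dagster mentioned above boil down to declaring tasks, declaring which upstream tasks each depends on, and executing everything in a dependency-respecting order. A toy sketch of that core idea (the task names are invented for illustration), using only the standard library:

```python
from graphlib import TopologicalSorter

# Hypothetical batch pipeline: task name -> set of upstream dependencies,
# the same dependency structure an Airflow or Dagster DAG would declare.
pipeline = {
    "extract_orders": set(),
    "extract_couriers": set(),
    "transform_deliveries": {"extract_orders", "extract_couriers"},
    "load_warehouse": {"transform_deliveries"},
}

order = list(TopologicalSorter(pipeline).static_order())
print(order)  # upstream tasks always come before their dependents
```

Real orchestrators add scheduling, retries, backfills, and observability on top, but the declarative dependency graph is the part users author.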

Qualifications

The vast majority of our services, applications and data pipelines are written in Python, so several years of having shipped production quality software in high throughput environments written in Python is essential. You should be very comfortable with typing, dependency management, unit-, integration- and end-to-end tests. If you believe that software isn’t just a program running on a machine, but the solution to someone’s problem, you’re in the right place.

Having previous experience in planning and executing complex projects that touch multiple teams/stakeholders and run across a whole organization is a big plus. Good communication and collaboration skills are essential, and you shouldn’t shy away from problems, but be able to discuss them in a constructive way with your team and the Wolt Product team at large.

Familiarity with parts of our tech stack is definitely a plus, but we hire for attitude and ability to learn over knowing a specific technology that can be learned.

The tools we are building inside of the data platform ultimately serve our many stakeholders across the whole company, whether they are Analysts, Data Scientists or engineers in other teams that produce or consume data. 

We want all of our users to love the tools we’re building and that is why we want you to focus on building intuitive and user friendly applications that enable everyone to use and work with data at Wolt.

See more jobs at Wolt

Apply for this job

+30d

Data Engineer PySpark AWS

2 years of experience, agile, Bachelor's degree, jira, terraform, scala, airflow, postgres, sql, oracle, Design, mongodb, java, mysql, jenkins, python, AWS

FuseMachines is hiring a Remote Data Engineer PySpark AWS

See more jobs at FuseMachines

Apply for this job

+30d

Senior Data Engineer

Devoteam, Tunis, Tunisia, Remote
airflow, sql, scrum

Devoteam is hiring a Remote Senior Data Engineer

Job Description

Within the "Data Platform" division, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on GCP, using tools such as BigQuery, Airflow, Looker, and DBT.
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize SQL query and ETL process performance to guarantee fast response times and scalability.
  • Implement data-quality processes to guarantee data integrity and consistency.
  • Work closely with engineering teams to integrate the data pipelines into existing applications and services.
  • Stay up to date with new technologies and best practices in data processing and analytics.

Qualifications

  • A five-year engineering degree (Bac+5) or equivalent university qualification with a specialization in computer science.
  • At least 3 years of experience in data engineering, with significant experience in a GCP cloud-based environment.
  • GCP (Google Cloud Platform) certification is a plus.
  • Excellent written and oral communication (high-quality deliverables and reporting)
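One of the responsibilities above is implementing data-quality processes that guarantee integrity and consistency. A minimal, hypothetical sketch of such checks (the column names, key field, and rules below are invented for illustration, not from the posting):

```python
# Minimal data-quality check: flag missing required values and duplicate keys.
def check_quality(rows, key="order_id", required=("order_id", "amount")):
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                issues.append(f"row {i}: missing {col}")
        k = row.get(key)
        if k in seen:
            issues.append(f"row {i}: duplicate {key}={k}")
        seen.add(k)
    return issues

rows = [
    {"order_id": 1, "amount": 9.5},
    {"order_id": 1, "amount": 3.0},   # duplicate key
    {"order_id": 2, "amount": None},  # missing required value
]
issues = check_quality(rows)
print(issues)
```

In a production pipeline the same rules would typically live in a framework (dbt tests, Great Expectations, or similar) and run as a gating step before data is published downstream.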

See more jobs at Devoteam

Apply for this job

+30d

Machine Learning Engineer (All Genders)

Dailymotion, Paris, France, Remote
airflow, Design, docker, python

Dailymotion is hiring a Remote Machine Learning Engineer (All Genders)

Job Description

Joining the Dailymotion data team means taking part in the creation of our unique algorithms, designed to bring more diversity and nuance to online conversations.

Our Machine Learning team, established in 2016, has been actively involved in developing models across a diverse range of topics. Primarily, we focus on recommender systems, and extend our expertise to content classification, moderation, and search functionalities.

You will be joining a seasoned and diverse team of Senior Machine Learning Engineers, who possess the capability to independently conceptualize, deploy, A/B test, and monitor their models.

We collaborate closely with the Data Product Team, aligning our efforts to make impactful, data-driven decisions for our users.

Learn more about our ongoing projects: https://medium.com/dailymotion

As a Machine Learning Engineer, you will:

  • Design and deploy scalable recommender systems, handling billions of user interactions and hundreds of millions of videos.
  • Contribute to various projects spanning machine learning domains, encompassing content classification, moderation, and ad-tech.
  • Foster autonomy, taking ownership of your scope, and actively contribute ideas and solutions. Maintain and monitor your models in production.
  • Collaborate with cross-functional teams throughout the entire machine learning model development cycle:
    • Define success metrics in collaboration with stakeholders.
    • Engage in data collection and hypothesis selection with the support of the Data Analysts Team.
    • Conduct machine learning experiments, including feature engineering, model selection, offline validation, and A/B Testing.
    • Manage deployment, orchestration, and maintenance on cloud platforms with the Data Engineering Team.

Qualifications

  • Master's degree/PhD in Machine Learning, Computer Science, or a related quantitative field.
  • At least 1 year of professional experience working with machine learning models at scale (experience with recommender systems is a plus).
  • Proficient in machine learning concepts, with the ability to articulate theoretical concepts effectively.
  • Strong coding skills in Python & SQL.
  • Experience in building production ML systems; familiarity with technologies such as GitHub, Docker, Airflow, or equivalent services provided by GCP/AWS/Azure.
  • Experience with distributed frameworks is advantageous (Dataflow, Spark, etc.).
  • Strong business acumen and excellent communication skills in both English and French (fluent proficiency).
  • Demonstrated aptitude for autonomy and proactivity is highly valued.

See more jobs at Dailymotion

Apply for this job

+30d

Staff Site Reliability Engineer

Mozilla, Remote US
6 years of experience, terraform, airflow, sql, Design, ansible, azure, java, c++, openstack, docker, elasticsearch, kubernetes, jenkins, python, AWS, backend, Node.js

Mozilla is hiring a Remote Staff Site Reliability Engineer


Why Mozilla?

Mozilla Corporation is the non-profit-backed technology company that has shaped the internet for the better over the last 25 years. We make pioneering brands like Firefox, the privacy-minded web browser, and Pocket, a service for keeping up with the best content online. Now, with more than 225 million people around the world using our products each month, we’re shaping the next 25 years of technology. Our work focuses on diverse areas including AI, social media, security and more. And we’re doing this while never losing our focus on our core mission – to make the internet better for everyone. 

The Mozilla Corporation is wholly owned by the non-profit 501(c) Mozilla Foundation. This means we aren’t beholden to any shareholders — only to our mission. Along with thousands of volunteer contributors and collaborators all over the world, Mozillians design, build and distribute open-source software that enables people to enjoy the internet on their terms. 

About this team and role:

Mozilla’s Release SRE Team is looking for a Staff SRE to help us build and maintain infrastructure that supports Mozilla products. You will combine skills from DevOps/SRE, systems administration, and software development to influence product architecture and evolution by crafting reliable cloud-based infrastructure for internal and external services.

As a Staff SRE you will work closely with Mozilla’s engineering and product teams and participate in significant engineering projects across the company. You will collaborate with hardworking engineers across different levels of experience and backgrounds. Most of your work will involve improving existing systems, building new infrastructure, evaluating tools and eliminating toil.

What you’ll do:

  • Manage infrastructure in AWS and GCP
  • Write, maintain, and expand automation scripts, metrics and monitoring tooling, and orchestration recipes
  • Lead other SREs and software development teams to deliver products with an eye on reliability and automation
  • Demonstrate accountability in the delivery of work
  • Spot and raise potential issues to the team
  • Be on-call for production services and infrastructure
  • Be trusted to resolve unclear but urgent tasks
What you’ll bring:
  • Degree and 6 years of experience in backend software development, cloud operations, or related DevOps/SRE work
  • Experience programming in at least one of the following languages: Python, Java, C/C++, Go, Node.js or Rust. 
  • Involvement in running services in the cloud
  • Kubernetes administration and optimization
  • Proven understanding of database systems (SQL and/or non-relational databases)
  • Infrastructure As Code and Configuration as Code tooling (Puppet, Chef, Ansible, Salt, Terraform, Amazon Cloudformation or Google Cloud Deployment Manager)
  • Strong communication skills
  • Curiosity and interest in learning new things
  • Commitment to our values:
    • Welcoming differences
    • Being relationship-minded
    • Practicing responsible participation
    • Having grit
Bonus points for…
  • CI/CD orchestration (Jenkins, CircleCI, or TravisCI)
  • ETL, data modeling, cloud-based data storage, processing
  • GCP Data Services (Dataflow, BigQuery, Dataproc)
  • Workflow and data pipeline orchestration (Airflow, Oozie, Jenkins, etc)
  • Container orchestration technologies (Kubernetes, OpenStack, Docker swarm, etc)
  • Open source software involvement
  • Monitoring/Logging with technologies like Splunk, ElasticSearch, Logstash/Fluentd, Stackdriver, Time-series databases like InfluxDB etc.

What you’ll get:

  • Generous performance-based bonus plans to all regular employees - we share in our success as one team
  • Rich medical, dental, and vision coverage
  • Generous retirement contributions with 100% immediate vesting (regardless of whether you contribute)
  • Quarterly all-company wellness days where everyone takes a pause together
  • Country specific holidays plus a day off for your birthday
  • One-time home office stipend
  • Annual professional development budget
  • Quarterly well-being stipend
  • Considerable paid parental leave
  • Employee referral bonus program
  • Other benefits (life/AD&D, disability, EAP, etc. - varies by country)

About Mozilla 

Mozilla exists to build the Internet as a public resource accessible to all because we believe that open and free is better than closed and controlled. When you work at Mozilla, you give yourself a chance to make a difference in the lives of Web users everywhere. And you give us a chance to make a difference in your life every single day. Join us to work on the Web as the platform and help create more opportunity and innovation for everyone online.

Commitment to diversity, equity, inclusion, and belonging

Mozilla understands that valuing diverse creative practices and forms of knowledge are crucial to and enrich the company’s core mission.  We encourage applications from everyone, including members of all equity-seeking communities, such as (but certainly not limited to) women, racialized and Indigenous persons, persons with disabilities, persons of all sexual orientations,gender identities, and expressions.

We will ensure that qualified individuals with disabilities are provided reasonable accommodations to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment, as appropriate. Please contact us at hiringaccommodation@mozilla.com to request accommodation.

We are an equal opportunity employer. We do not discriminate on the basis of race (including hairstyle and texture), religion (including religious grooming and dress practices), gender, gender identity, gender expression, color, national origin, pregnancy, ancestry, domestic partner status, disability, sexual orientation, age, genetic predisposition, medical condition, marital status, citizenship status, military or veteran status, or any other basis covered by applicable laws.  Mozilla will not tolerate discrimination or harassment based on any of these characteristics or any other unlawful behavior, conduct, or purpose.

Group: C

#LI-REMOTE

Req ID: R2515

Hiring Ranges:

US Tier 1 Locations
$163,000 - $239,000 USD
US Tier 2 Locations
$150,000 - $220,000 USD
US Tier 3 Locations
$138,000 - $203,000 USD

See more jobs at Mozilla

Apply for this job

+30d

Senior Data Engineer (Data Competency Center)

Sigma Software, Warsaw, Poland, Remote
tableau, nosql, airflow, sql, python

Sigma Software is hiring a Remote Senior Data Engineer (Data Competency Center)

Job Description

  • Pre-sales collaboration: Collaborating with solution architects, project managers, and business analysts to gather requirements, perform investigations, and provide estimations for potential projects
  • Project initiation: Taking the lead in driving new projects and being a key driver in their success
  • Project execution: Becoming a part of the team for one of the opportunities in the pipeline, fulfilling the Data Engineer role
  • Contributing to and spearheading Data Engineering excellence: Researching technology trends and conducting best practices analysis to ensure our solutions remain state-of-the-art

Qualifications

  • Proficiency in building, maintaining, testing, and delivering large-scale data pipelines
  • ETL/ELT expertise: Practical experience with at least one end-to-end solution and its start, development, and maintenance.
  • Proficiency in working with large data extraction, aggregation, and manipulation using a selected database. Experience and strong knowledge of SQL and NoSQL databases. Understanding the pros and cons of different types of databases, experience in data modeling, and database optimizations
  • At least 3-5 years of proficiency in Python or Scala for data processing and transformation
  • Hands-on experience with main frameworks and libraries in Data Engineering domains: Spark, Hive, Kafka, Airflow, Flink, etc., with a proven record of debugging and optimization experience
  • Experience with CI/CD in data engineering
  • Experience with cloud-based data processing solutions

WILL BE A PLUS:

  • Knowledge of K8s orchestrations
  • Exposure to OLAP tools like Tableau, Qlik, Grafana, or similar
  • Databricks experience/certification

See more jobs at Sigma Software

Apply for this job

+30d

Data Manager

remote-first, tableau, airflow, sql, python

Parsley Health is hiring a Remote Data Manager

About us:

Parsley Health is a digital health company with a mission to transform the health of everyone, everywhere with the world's best possible medicine. Today, Parsley Health is the nation's largest health care company helping people suffering from chronic conditions find relief with root cause resolution medicine. Our work is inspired by our members’ journeys and our actions are focused on impact and results.

The opportunity:

We’re hiring an experienced Manager of Data to drive the data strategy for Parsley Health by championing quality data across the organization and leading the data science, analytics, and data engineering functions.

This person should have knowledge of the healthcare space, specifically related to health outcomes and benchmarks, and will report to the Chief Technology Officer.

What you’ll do:

  • Passionate about our mission to live healthier through revolutionary primary care, excited for the future of healthcare, and a personal belief in wellness.
  • Collaborate on strategic direction with the leadership team and executives to evolve our mid and long term roadmap
  • A hands-on manager who will write code and has experience across a variety of systems and architectures, analysis, and presentation. 
  • Support identifying clinical outcomes and publishing papers with the clinical team and SVP of Clinical Operations.
  • Empower high quality product decisions through data analysis.
  • Develop machine learning models to better assist our members’ health care needs.
  • Foster a strong culture of data-driven decision making through training and mentorship within your team and across the company.
  • Implement and maintain a world-class data stack that empowers data consumers with reliable, accessible, compliant insights.
  • Consult with data consumers to improve their measurement strategies.
  • Manage a team of two members and grow it to a multi disciplinary function within a few years. 

What you’ll need:

  • Experience in building a data strategy for a small team or company. Potentially previously the first data hire at a company (not required). 
  • Proficient in statistical methods.
  • Loves to dive deep into problems and solutions to identify root causes, and is able to extrapolate a big-picture strategy or story. 
  • Helps people with their careers while creating and improving upon structures to enable career growth 
  • Sets up processes and governance around project management, data quality, prioritization, etc.
  • Well versed in SQL, at least one scripting language (R, Python, etc.), Excel, and BI platforms (Looker, Tableau, etc.).

Tech stack

  • Python
  • GCP
  • Airflow
  • SQL
  • Looker
  • Dataform (dbt)

Benefits and Compensation:

  • Equity Stake
  • 401(k) + Employer Matching program
  • Remote-first with the option to work from one of our centers in NYC or LA 
  • Complimentary Parsley Health Complete Care membership
  • Subsidized Medical, Dental, and Vision insurance plan options
  • Generous 4+ weeks of paid time off
  • Annual professional development stipend

Parsley Health is committed to providing an equitable, fair and transparent compensation program for all employees.

The starting salary for this role is between $165,750 - $195,000, depending on skills and experience. We take a geo-neutral approach to compensation within the US, meaning that we pay based on job function and level, not location.

Individual compensation decisions are based on a number of factors, including experience level, skillset, and balancing internal equity relative to peers at the company. We expect the majority of the candidates who are offered roles at our company to fall healthily throughout the range based on these factors. We recognize that the person we hire may be less experienced (or more senior) than this job description as posted. If that ends up being the case, the updated salary range will be communicated with candidates during the process.


At Parsley Health we believe in celebrating everything that makes us human and are proud to be an equal opportunity workplace. We embrace diversity and are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe that the more inclusive we are, the better we can serve our members. 


Important note:

In light of a recent increase in hiring scams, if you're selected to move onto the next phase of our hiring process, a member of our Talent Acquisition team will reach out to you directly from an @parsleyhealth.com email address to guide you through our interview process. 

    Please note: 

  • We will never communicate with you via Microsoft Teams
  • We will never ask for your bank account information at any point during the recruitment process, nor will we send you a check (electronic or physical) to purchase home office equipment

We look forward to connecting!

#LI-Remote

See more jobs at Parsley Health

Apply for this job

+30d

Senior AI Scientist (Taiwan)

GOGOX, Remote
airflow, sql, Design, azure, api, java, python, AWS

GOGOX is hiring a Remote Senior AI Scientist (Taiwan)

See more jobs at GOGOX

Apply for this job

+30d

Manager, Software Engineering - Data Platform

Samsara (Canada - Remote)
Master’s Degree, terraform, airflow, kubernetes, AWS

Samsara is hiring a Remote Manager, Software Engineering - Data Platform

Who we are

Samsara (NYSE: IOT) is the pioneer of the Connected Operations™ Cloud, which is a platform that enables organizations that depend on physical operations to harness Internet of Things (IoT) data to develop actionable insights and improve their operations. At Samsara, we are helping improve the safety, efficiency and sustainability of the physical operations that power our global economy. Representing more than 40% of global GDP, these industries are the infrastructure of our planet, including agriculture, construction, field services, transportation, and manufacturing — and we are excited to help digitally transform their operations at scale.

Working at Samsara means you’ll help define the future of physical operations and be on a team that’s shaping an exciting array of product solutions, including Video-Based Safety, Vehicle Telematics, Apps and Driver Workflows, Equipment Monitoring, and Site Visibility. As part of a recently public company, you’ll have the autonomy and support to make an impact as we build for the long term. 

Recent awards we’ve won include:

Glassdoor's Best Places to Work 2024

Best Places to Work by Built In 2024

Great Place To Work Certified™ 2023

Fast Company's Best Workplaces for Innovators 2023

Financial Times The Americas’ Fastest Growing Companies 2023

We see a profound opportunity for data to improve the safety, efficiency, and sustainability of operations, and hope you consider joining us on this exciting journey. 

Click here to learn more about Samsara's cultural philosophy.

About the role:

The Samsara Data Platform team owns and develops the analytic platform across Samsara. As a Manager II of Data Platform, you will build and lead teams that maintain our data lake and surrounding infrastructure. You will also be responsible for meeting new business needs, including expanding the platform as the company grows (both in size and geographic coverage), privacy and security needs, and customer-facing feature developments.

You should apply if:

  • You want to impact the industries that run our world: The software, firmware, and hardware you build will result in real-world impact—helping to keep the lights on, get food into grocery stores, and most importantly, ensure workers return home safely.
  • You want to build for scale: With over 2.3 million IoT devices deployed to our global customers, you will work on a range of new and mature technologies driving scalable innovation for customers across industries driving the world's physical operations.
  • You are a life-long learner: We have ambitious goals. Every Samsarian has a growth mindset as we work with a wide range of technologies, challenges, and customers that push us to learn on the go.
  • You believe customers are more than a number: Samsara engineers enjoy a rare closeness to the end user, and you will have the opportunity to participate in customer interviews, collaborate with customer success and product managers, and use metrics to ensure our work is translating into better customer outcomes.
  • You are a team player: Working on our Samsara Engineering teams requires a mix of independent effort and collaboration. Motivated by our mission, we’re all racing toward our connected operations vision, and we intend to win—together.

Click here to learn about what we value at Samsara. 

In this role, you will: 

  • Lead a team of data-focused engineers to build and maintain a stable, scalable, and modern data platform capable of handling petabytes of data. 
  • Help drive long-term planning and establish scalable processes for execution
  • Actively contribute to building the data roadmap for Samsara
  • Stay connected to novel technological developments that suit Samsara’s needs.
  • Champion, role model, and embed Samsara’s cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices
  • Hire, develop and lead an inclusive, engaged, and high-performing international team

Minimum requirements for the role:

  • BS, MS, or PhD in Computer Science or other related technical degree
  • 2+ years of technical people management experience
  • 5+ years of relevant technical experience with data infrastructure
  • Experience building and deploying large-scale data platform systems with feedback loops for continuous improvement
  • Comfortable leading infrastructure development in collaboration with cross-functional teams, scientists, and researchers

An ideal candidate also has:

  • MS or PhD in Computer Science or other technical degree
  • Experience with state-of-art data platform technologies such as:
    • AWS (S3 and RDS, SQS, DMS, Dynamo, etc.)
    • Spark (a must); Flink, Trino/Presto (a plus)
    • Data lake file formats such as Delta, Hudi, or Iceberg
    • Python/Scala
    • Container based orchestration services such as Kubernetes, ECS, Fargate, etc.
    • Infrastructure as Code tools (e.g., Terraform)
    • Go is a plus
    • Data orchestration system experience is a plus (e.g., Airflow, Dagster)
  • Proven track record for innovation and delivering value to customers (both internal and external).
  • Demonstrated ability to build cross-functional consensus and drive cross-team collaboration
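The orchestration systems named above (Airflow, Dagster) share one core idea: a pipeline is a directed acyclic graph of tasks, and upstream tasks must finish before downstream ones start. As a rough, framework-free sketch of that concept using only Python's standard library (the task names here are hypothetical, not from Samsara's stack):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Toy pipeline: each task maps to the set of upstream tasks it
# depends on, mirroring how orchestrators like Airflow or Dagster
# model a run as a directed acyclic graph (DAG).
pipeline = {
    "extract_events": set(),
    "extract_devices": set(),
    "transform_join": {"extract_events", "extract_devices"},
    "load_lake": {"transform_join"},
}

def execution_order(dag):
    """Return one valid ordering: every task after its upstreams."""
    return list(TopologicalSorter(dag).static_order())

order = execution_order(pipeline)
print(order)  # extracts first, then transform_join, then load_lake
```

A real orchestrator adds scheduling, retries, and parallel execution of independent branches on top of exactly this dependency resolution.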

Samsara’s Compensation Philosophy:Samsara’s compensation program is designed to deliver Total Direct Compensation (based on role, level, and geography) that is at or above market. We do this through our base salary + bonus/variable + restricted stock unit awards (RSUs) for eligible roles.  For eligible roles, a new hire RSU award may be awarded at the time of hire, and additional RSU refresh grants may be awarded annually. 

We pay for performance, and top performers in eligible roles may receive above-market equity refresh awards which allow employees to achieve higher market positioning.

The range of annual base salary for full-time employees for this position is below. Please note that base pay offered may vary depending on factors including your city of residence, job-related knowledge, skills, and experience.
$142,800 - $184,800 CAD

At Samsara, we welcome everyone regardless of their background. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender, gender identity, sexual orientation, protected veteran status, disability, age, and other characteristics protected by law. We depend on the unique approaches of our team members to help us solve complex problems. We are committed to increasing diversity across our team and ensuring that Samsara is a place where people from all backgrounds can make an impact.

Benefits

Full time employees receive a competitive total compensation package along with employee-led remote and flexible working, health benefits, Samsara for Good charity fund, and much, much more. Take a look at our Benefits site to learn more.

Accommodations 

Samsara is an inclusive work environment, and we are committed to ensuring equal opportunity in employment for qualified persons with disabilities. Please email accessibleinterviewing@samsara.com or click here if you require any reasonable accommodations throughout the recruiting process.

Flexible Working 

At Samsara, we embrace a flexible working model that caters to the diverse needs of our teams. Our offices are open for those who prefer to work in-person and we also support remote work where it aligns with our operational requirements. For certain positions, being close to one of our offices or within a specific geographic area is important to facilitate collaboration, access to resources, or alignment with our service regions. In these cases, the job description will clearly indicate any working location requirements. Our goal is to ensure that all members of our team can contribute effectively, whether they are working on-site, in a hybrid model, or fully remotely. All offers of employment are contingent upon an individual’s ability to secure and maintain the legal right to work at the company and in the specified work location, if applicable.

Fraudulent Employment Offers

Samsara is aware of scams involving fake job interviews and offers. Please know we do not charge fees to applicants at any stage of the hiring process. Official communication about your application will only come from emails ending in ‘@samsara.com’ or ‘@us-greenhouse-mail.io’. For more information regarding fraudulent employment offers, please visit our blog post here.

Apply for this job

+30d

Senior Business Intelligence Engineer

Square (San Francisco, CA, Remote)
tableau, airflow, sql, Design, java, mysql, python

Square is hiring a Remote Senior Business Intelligence Engineer

Job Description

The BI Team at Cash App enables our teams to make impactful business decisions. Our BI Engineers handle everything from data architecture and modeling to data pipeline tooling and dashboarding. As a Senior BI Engineer at Cash App, you will report to the BI Manager and work with Analysts, Data Scientists, Software Engineers and Product Managers to lay the foundation for analyzing our large, unique dataset. We are an extremely data-driven team - from understanding our customers, managing and operating our business, to informing product development. You will build, curate, document, and manage key datasets and ETLs to increase the impact of the entire team.

You will:

  • Create new data models, and optimize existing ones, for the most widely used Cash App events, entities, and processes
  • Standardize business and product metric definitions in curated and optimized datasets
  • Build pipelines out of our data warehouse
  • Teach (and encourage) others to self-serve while building tools that make it simpler and faster for them to do so
  • Promote data, analytics, and data model design best practices
  • Create dashboards that help our teams understand the performance of the business and help them make decisions

Qualifications

You have:

  • Background/knowledge in Computer Science, Applied Math, Engineering, Stats, Physics, or something comparable
  • 5+ years of industry experience building complex, scalable ETLs for a variety of different business and product use cases
  • An interest in advancing Cash App's vision of building products for economic empowerment - this should be something that legitimately excites you

Technologies we use and teach:

  • SQL (MySQL, Snowflake, BigQuery, etc.)
  • Airflow, Looker and Tableau
  • Python and Java
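The "standardize metric definitions in curated datasets" responsibility above boils down to encoding one agreed SQL definition of a metric that every dashboard reads, instead of each analyst re-deriving it. A minimal, self-contained illustration using Python's built-in sqlite3 as a stand-in for a real warehouse like Snowflake or BigQuery (table and column names are invented for the example):

```python
import sqlite3

# Hypothetical fact table standing in for a warehouse events table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (user_id TEXT, amount_cents INTEGER, day TEXT)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?, ?)",
    [("a", 500, "2024-01-01"), ("b", 250, "2024-01-01"), ("a", 1000, "2024-01-02")],
)

# One curated daily model: the canonical definition of "active users"
# and "payment volume" that downstream dashboards all consume.
daily = conn.execute(
    """
    SELECT day,
           COUNT(DISTINCT user_id) AS active_users,
           SUM(amount_cents)       AS volume_cents
    FROM payments
    GROUP BY day
    ORDER BY day
    """
).fetchall()
print(daily)  # [('2024-01-01', 2, 750), ('2024-01-02', 1, 1000)]
```

In practice a tool like Airflow materializes a query like this on a schedule into a curated table, and Looker or Tableau points at that table rather than at raw events.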

See more jobs at Square

Apply for this job