Data Engineer Remote Jobs

59 Results

1d

Snowflake Data Engineer

Imperva | Remote, Mexico City, Mexico
1 year of experience, tableau, sql, salesforce, oracle, Dynamics, Design, azure

Imperva is hiring a Remote Snowflake Data Engineer

Snowflake Data Engineer 

The ability to design, implement, and optimize large-scale data and analytics solutions on the Snowflake Cloud Data Warehouse is essential. Expertise with Amazon S3 is a must.

Responsibilities:
The Snowflake Data Engineer at Imperva is responsible for:

  • Overall responsibility for managing and maintaining the Snowflake environment, from both an administration and a development standpoint.
  • Implementing ELT pipelines within and outside the data warehouse, using Snowflake's SnowSQL.
  • Querying Snowflake using SQL, with expertise in creating complex views and UDFs (see the sketch after this list).
  • Developing ELT jobs in Talend for extracting, loading, and transforming data.
  • Assisting with production issues in the data warehouse, such as reloading data, transformations, and translations; finding issues quickly and debugging them.
  • Developing database and reporting designs by creating complex views based on business intelligence and reporting requirements.
  • Supporting BI solutions that report on data extracted from CRM and ERP solutions (e.g. Salesforce, CPQ, NetSuite, Oracle Apps, AX Dynamics, Siebel CRM, Oracle E-Business Suite).
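
As a flavor of the view and UDF work above, here is a minimal sketch using the snowflake-connector-python package. It is illustrative only: the account, credentials, and table names are invented, not taken from the posting.

    import snowflake.connector

    # Assumed account and credentials; replace with real values.
    conn = snowflake.connector.connect(
        account="xy12345", user="etl_user", password="...",
        warehouse="ANALYTICS_WH", database="SALES", schema="REPORTING",
    )
    cur = conn.cursor()

    # A reporting view joining hypothetical CRM and ERP tables.
    cur.execute("""
        CREATE OR REPLACE VIEW won_revenue AS
        SELECT o.account_id, SUM(i.amount) AS revenue
        FROM crm_opportunities o
        JOIN erp_invoices i ON i.opportunity_id = o.id
        WHERE o.stage = 'Closed Won'
        GROUP BY o.account_id
    """)

    # A simple SQL UDF that normalizes currency amounts to cents.
    cur.execute("""
        CREATE OR REPLACE FUNCTION to_cents(amount NUMBER(18, 2))
        RETURNS NUMBER
        AS 'amount * 100'
    """)
    conn.close()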

Qualifications:

Snowflake Data Engineers are required to have the following qualifications:

  • Minimum of 1 year of experience designing and implementing a full-scale data warehouse solution based on Snowflake.
  • A minimum of three years of experience developing production-ready data ingestion and processing pipelines using ELT tools (Talend).
  • Knowledge of Amazon S3 is a must.
  • A solid understanding of data science concepts is an additional advantage.
  • Data analysis expertise.
  • Working knowledge of ELT tools such as Talend, Informatica, or similar.
  • Knowledge of BI tools such as Tableau, Power BI, and Qlik Sense.
  • Two years of hands-on experience with complex data warehouse solutions on Teradata, Oracle, or DB2 platforms.
  • Excellent proficiency with Snowflake internals and with integrating Snowflake with other technologies for data processing and reporting.
  • A highly effective communicator, both orally and in writing.
  • Problem-solving and architecting skills in cases of unclear requirements.
  • A minimum of one year of experience architecting large-scale data solutions, performing architectural assessments, examining architectural alternatives, and choosing the best solution in collaboration with both IT and business stakeholders.
  • Extensive experience with Talend, Informatica, and building data ingestion pipelines.
  • Expertise with Amazon Web Services, Microsoft Azure, and Google Cloud.
  • Good knowledge of MSSQL from a DBA perspective is an additional advantage.

About Imperva:

Imperva is an analyst-recognized, cybersecurity leader—championing the fight to secure data and applications wherever they reside. Once deployed, our solutions proactively identify, evaluate, and eliminate current and emerging threats, so you never have to choose between innovating for your customers and protecting what matters most. Imperva—Protect the pulse of your business. Learn more: www.imperva.com, our blog, on Twitter.

Rewards:

Imperva offers a competitive compensation package that includes base salary, medical, flexible time off and more. The anticipated base salary range for this position is $124,000 - $186,000. The salary offered will be determined based on the candidate’s experience, knowledge, skills, other qualifications, and location. It’s an exciting time to work in the security space. Check out our products and services at www.imperva.com and career opportunities at www.imperva.com/careers

Legal Notice:

Imperva is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, ancestry, pregnancy, age, sexual orientation, gender identity, marital status, protected veteran status, medical condition or disability, or any other characteristic protected by law.

#LI-Remote 

 

See more jobs at Imperva

Apply for this job

1d

AWS Data Engineer (Remote)

Loginsoft Consulting LLC | Columbia, MD (Remote)
nosql, sql, Design, AWS

Loginsoft Consulting LLC is hiring a Remote AWS Data Engineer (Remote)

NOTE: THIS POSITION IS TO JOIN AS W2 ONLY.

AWS Data Engineer

Location: Columbia, MD (REMOTE)

Long-term Contract

SUMMARY:

As an AWS Data Engineer, you will play a supporting role in our data migration from on-prem data stores to a modern AWS cloud-based data and analytics architecture as we embark on a Digital Transformation to ensure we have the proper tools and technology to become an industry leader in the financial sector. You will assist in creating, updating, and managing ETL and data integration processes, presentation-layer endpoints, and SQL statements to create both the physical and logical structures for data manipulation within the AWS data architecture. You will work under the guidance of senior data engineers and collaborate with internal stakeholders across the organization to identify, design, and implement internal process improvements, including re-designing data infrastructure for greater scalability, optimizing data delivery, and enhancing effective and efficient data utilization by turning data into insights. You will also help coordinate with application vendors and assist in project planning and implementation. You will assist in interfacing with internal business lines and project management staff, and learn to become the authoritative subject matter expert on data management.

ESSENTIAL JOB FUNCTIONS:

  • Assist in using AWS cloud technologies like Redshift, Glue, EC2, S3, etc., to build a scalable cloud data platform.
  • Develop data pipelines to build our data lake in AWS, leveraging technologies like EC2, S3, Lambda, Glue, DynamoDB, Redshift, etc. (a minimal sketch follows this list).
  • Assist in developing and designing models for complex analytical and data warehouse/mart systems including tasks related to database design, data analysis, data quality, metadata management and support.
  • Assist in presenting cloud data solutions to given use cases, giving guidance on options in architecting AWS data solutions and concisely presenting the pros and cons and alternatives to proposed solutions.
  • Learn about workforce transformation by receiving knowledge transfer, operational guidance and training to team members on AWS cloud data tools and technologies.
  • Assist in providing data and analytical solutions to ensure efficient manipulation, organization, and sharing of information, data, and/or master data in a data lake, data warehouse, and data mart construct using AWS technologies.
  • Assist in collaborating with Cloud Architect to define the appropriate cloud database solutions, such as AWS Redshift, Database Migration Services, Glue, EMR, EC2, S3, Relational Database Service (RDS)/Aurora, and Amazon Kinesis.
  • Assist in defining access patterns, storage methods, and integration patterns for the movement and maintenance of data into unstructured, relational, and NoSQL databases.
  • Assist in collaborating with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
  • Assist in writing and maintaining functional and technical documentation and specifications.
  • Assist in creating project plans, test plans, test data sets and automated testing to ensure all components of the system meet specifications.
  • Provide technical consulting for problem resolution, performance optimizations and data processing questions.
  • Assist in analyzing, defining and documenting system requirements for data, workflow, logical processes, interfaces with other systems, auditing, reporting requirements and production configuration.
  • Assist in overseeing database backup, clustering, mirroring, replication and failover. Evaluate and recommend new database technology, when appropriate.
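
As one concrete flavor of the S3 and Glue work in the list above, here is a minimal boto3 sketch; the bucket name, key, and Glue job name are assumptions for illustration, not the client's actual resources.

    import boto3

    s3 = boto3.client("s3")
    glue = boto3.client("glue")

    # Land a raw extract in the data lake (hypothetical bucket and key).
    s3.upload_file(
        "daily_accounts.csv",
        "example-data-lake-raw",
        "accounts/2023-01-01/daily_accounts.csv",
    )

    # Kick off a Glue ETL job that moves the raw zone into curated tables.
    run = glue.start_job_run(
        JobName="accounts-raw-to-curated",          # assumed Glue job name
        Arguments={"--ingest_date": "2023-01-01"},  # passed to the job script
    )
    print("Started Glue run:", run["JobRunId"])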

KNOWLEDGE, SKILLS, AND ABILITIES:

  • Bachelor's degree or equivalent.
  • 2+ years of experience working as a Data Engineer.
  • Familiarity with AWS data ingestion and warehousing technology.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets using AWS cloud services.
  • Experience in ETL optimization, designing, coding, and tuning big data processes using AWS cloud services is a plus.
  • Familiarity with AWS cloud services: S3, Lambda, Glue, Redshift, Athena is preferred.
  • Familiarity with relational SQL, data warehousing and data modeling concepts.
  • Familiarity with a programming language of choice.
  • Excellent written and verbal communication skills.
  • Proven ability to work in a collaborative team environment.
  • Proven ability to deliver high-quality work in a timely manner.
  • A keen attention to detail and an ability to think strategically.
  • Ability to understand business requirements and translate them into technical solutions.

SPECIFIC PHYSICAL REQUIREMENTS:

  • Work requires reasonable mobility in and around the work area. The ability to use standard computer and phone systems is required. Travel to other Client locations may be required.

WORKING CONDITIONS:

  • Normal office environment where there is almost no discomfort due to temperature, dust, noise, or other disagreeable elements.
  • Work includes little or no potential exposure to hazardous conditions.
  • Must be able to travel to remote company and/or client locations

See more jobs at Loginsoft Consulting LLC

Apply for this job

1d

Data Engineer (OBRIO)

Genesis | Kyiv, UA (Remote)
airflow, sql, Firebase, mobile, api, git, postgresql, mysql, kubernetes, python

Genesis is hiring a Remote Data Engineer (OBRIO)

Hi!
Let us introduce you to one of the largest and most successful companies in the Genesis ecosystem of businesses: OBRIO. Our team consists of more than 150 talented professionals whose ambition and drive for success help us create the best products on the market.

For more than three years we have been building and growing our own products. Our flagship product, Nebula, is an astrology platform that ranks in the top 10 apps in the Lifestyle category and is currently available on both platforms, mobile and web. Here are some details:

  • Nebula is #1 in its niche by downloads and revenue;
  • 20+ million downloads;
  • 250K+ DAU;
  • 4.7 average store rating (more than 215K ratings).

As OBRIO and its team processes scale, we need a Data Engineer who will strengthen our mobile team and apply their talent and technical skills to keep our data work running smoothly by building and maintaining a new data structure. By joining us, you will be able to shape the full architecture yourself and influence how the ETL process is built. In this role you will interact most often with our analysts and the back-end team. So we are pulling back the curtain on team life and introducing you to Zhenia. She joined our team almost two years ago and gladly shares her expertise, because openness to new challenges and knowledge sharing is what we trust in.

So what's next? Here are the tasks awaiting you as a Data Engineer:

  • Building the data workflow;
  • Re-checking existing ETL processes, optimizing and continuously upgrading them;
  • Automating health checks and building alerting;
  • Collecting documentation and keeping it up to date;
  • Building the process of communicating with analysts and handing over data/datasets, with appropriate review, to optimize work with the database;
  • Building and maintaining the database architecture;
  • Building dashboards for tracking data quality and data structure;
  • Collecting data from various sources via API (marketing costs, Amplitude data, etc.);
  • Preparing datasets for ML models.

The stack we currently work with:

  • Vertica;
  • PostgreSQL;
  • MySQL;
  • BigQuery;
  • Python;
  • Git;
  • Services you will also have the opportunity to work with: Firebase, Amplitude, AppsFlyer, Google Analytics.

What will be a green flag when we consider your application?

  • 1+ year of experience working with data;
  • Solid knowledge of Python for building ETL data pipelines (Pandas, pyodbc); see the sketch after this list;
  • The ability to work autonomously with third-party APIs;
  • Excellent SQL skills (PostgreSQL, MySQL, Vertica);
  • An understanding of database architecture design;
  • Experience with cloud services;
  • Experience with big data storage and/or processing environments (e.g. Apache Spark, Snowflake, BigQuery, or similar);
  • An understanding of orchestration tools (Kubernetes, Apache Airflow).
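
To make the Pandas/pyodbc requirement concrete, here is a minimal ETL sketch; the connection string, table names, and columns are invented for illustration.

    import pandas as pd
    import pyodbc

    # Assumed DSN-style connection string; replace with real driver/credentials.
    conn = pyodbc.connect("DSN=warehouse;UID=etl_user;PWD=...")

    # Extract: pull raw events (assumes created_at arrives as a timestamp).
    df = pd.read_sql("SELECT user_id, event_name, created_at FROM raw_events", conn)

    # Transform: daily event counts per user.
    daily = (
        df.assign(day=df["created_at"].dt.date)
          .groupby(["user_id", "day", "event_name"])
          .size()
          .reset_index(name="event_count")
    )

    # Load: write the aggregate back through the same connection.
    cur = conn.cursor()
    cur.fast_executemany = True
    cur.executemany(
        "INSERT INTO daily_event_counts (user_id, day, event_name, event_count) "
        "VALUES (?, ?, ?, ?)",
        list(daily.itertuples(index=False, name=None)),
    )
    conn.commit()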

Why is OBRIO the right place to continue your career? Because we strive to provide:

  • The opportunity to work on different products and choose the most interesting directions for career growth;
  • Freedom in decision making, a choice of modern tools and approaches, and the ability to influence the product;
  • Flexible task distribution and direct communication with the marketing and product teams;
  • The opportunity to build analytics processes based on your own expert point of view;
  • Unlimited growth: you can take the initiative, take on more ambitious tasks, build analytics from scratch, or create your own product within the team.

Conditions and benefits we offer:

  • The option to work remotely from any safe place in the world;
  • 20 days off per year and an unlimited number of sick days at the company's expense;
  • Equipment provided if needed;
  • Online corporate doctor services at the company's expense, and health insurance in Ukraine or compensation of a fixed health insurance amount abroad after the probation period;
  • A large corporate library (we buy all the necessary literature, webinars, and master classes), plus internal online meetups and lectures;
  • Training compensation;
  • Corporate culture: help with relocation to safe places, advice on legal residence abroad, information on support for citizens from third countries, and help with finding housing;
  • Online events and team building.

Join our team!

Find out more about us on social media: Instagram, LinkedIn, Facebook.

See more jobs at Genesis

Apply for this job

15d

Big Data Engineer

tableau, sql, oracle, Design, c++, python

Techstra Solutions is hiring a Remote Big Data Engineer


See more jobs at Techstra Solutions

Apply for this job

17d

Data Engineer I - Remote

sql, Design, python, AWS

Help At Home is hiring a Remote Data Engineer I - Remote

See more jobs at Help At Home

Apply for this job

17d

Data Engineer II - Remote

airflow, sql, Design, python, AWS

Help At Home is hiring a Remote Data Engineer II - Remote


See more jobs at Help At Home

Apply for this job

27d

Analytics Engineer / Data Engineer

Education Analytics | Madison, WI (preferred), Remote
airflow, postgres, sql, Design, git, python, AWS

Education Analytics is hiring a Remote Analytics Engineer / Data Engineer

Analytics Engineer / Data Engineer

Education Analytics strives to deliver sophisticated, research-informed analytics to educators and school administrators to support their work in improving student outcomes. To support them best, the data they receive must be accurate, up to date, secure, and easily accessible. The person in this role will be a critical team member who helps to make that a reality.

We are seeking a full-time Data Engineer or Analytics Engineer to lead the design, build, and maintenance of automated data pipelines and analytic systems. An ideal candidate has strong SQL skills and experience with data warehousing concepts, familiarity with complex data integration and/or analysis, and an interest in improving K-12 education.

This role supports the timely delivery of data and analytics to educators and administrators who use this data to drive change and improvement in education. We are looking for candidates who are innovative, hard-working, and curious to help us continue to develop our team's capacity in the development and use of cutting-edge tools. Our team is consistently evaluating tools for new projects and looking for the best tools for the job. Our current stack uses an ELT approach via Apache Airflow and dbt to create data warehouses in Snowflake or Postgres, depending on the scale of the data. These posts illustrate some projects that members of our team might work on.
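
To make that stack concrete, here is a minimal sketch, not EA's actual code, of an Airflow DAG that runs dbt against the warehouse; the DAG id, schedule, and project path are assumptions.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="nightly_elt",              # hypothetical name
        start_date=datetime(2023, 1, 1),
        schedule="@daily",                 # schedule_interval on older Airflow 2.x
        catchup=False,
    ) as dag:
        # Build the warehouse models, then test them.
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt/warehouse_project && dbt run",
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="cd /opt/dbt/warehouse_project && dbt test",
        )
        dbt_run >> dbt_test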

Responsibilities

  • Lead the design and implementation of data warehousing structures for research, analytics, and reporting/dashboarding
  • Apply best practices from software engineering to data pipelines
  • Help implement code testing, continuous integration, and deployment strategies to ensure system reliability
  • Design and implement complex pipelines to integrate data coming from a mix of APIs, flat files, or other database sources
  • Develop and improve internal tools and systems to efficiently deliver high-quality, actionable metrics
  • Work collaboratively within a team of analysts, school system leaders, and other engineers to create analytics solutions that are scalable, easy to maintain, and support high quality research
  • Explore and apply new cutting-edge tools to drive innovation across a variety of projects

Qualifications

  • Experience architecting data warehouse and data lake structures that are intuitive and performant
  • Knowledge of best design practices in modern cloud-based data warehouses
  • Experience designing, implementing, and maintaining modern ELT pipelines with a clean code-base
  • Fluency in SQL and experience with Python and Linux
  • Knowledge of software engineering best practices, particularly in team-based development using Git
  • Ability to proactively identify and defend against potential data quality & processing issues

Bonus Skills:

  • Experience with cloud-based columnar data warehouses (Snowflake, RedShift, BigQuery)
  • Experience with Data Build Tool (dbt)
  • Experience with Apache Airflow, or other modern data pipeline systems
  • Desire to work with cutting edge tools in a fast-paced environment
  • Familiarity with AWS tooling and best practices

Hiring Process

  1. Hiring team reviews resumes and cover letters
  2. Selected candidates invited to 30-minute interview with Data Engineering team managers to discuss skills and experience alignment
  3. Selected candidates invited for a full-day final interview. Candidates are sent a skills exercise in advance that will be discussed in the interview; it doesn’t require any coding or pre-submitted work, and is about concepts and planning of data systems. In addition to discussing the exercise, there will be approximately 2-3 more hours of interviews to meet other Data Engineering team members and key members of other teams, and to help candidates learn more about Education Analytics & the role.

How you will successfully onboard in this role

In your first few weeks, you will work through a training exercise our team has developed that familiarizes our new hires with our development setup & tooling, and join team meetings and 1-1 check-ins. From there, you will likely work on 1-2 projects and begin joining project meetings to gain familiarity with the context of the work we do. Next you will start to take on smaller tasks in those projects, and by 3-6 months in, begin to take the lead on larger initiatives.

Additional details

The expectation is 45 hours per week, and nights and weekends are sometimes required. Our preference is for candidates to primarily work from EA’s office in Madison, WI.

About us: Education Analytics is a non-profit organization that uses data analysis to inform education policy decisions. We work with school districts, regional offices of education, non-profits, and policymakers to identify ways to make education systems better.

Benefits:

  • Competitive salary
  • Annual merit bonuses
  • Paid holidays and one month of paid vacation per year
  • Generous 401k and health benefits
  • Parental leave benefit of up to 26 weeks of paid leave
  • Free Madison Metro transit pass or subsidized office parking
  • Casual office environment
  • Location right in the heart of downtown Madison, WI

Education Analytics is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

See more jobs at Education Analytics

Apply for this job

+30d

Senior Google Cloud Data Engineer

Nordcloud Finland | Helsinki, FI; Jyväskylä, FI; Salo, FI; Oulu, FI; Kuopio, FI (Remote)
agile, terraform, scala, airflow, sql, Design, mongodb, azure, AWS

Nordcloud Finland is hiring a Remote Senior Google Cloud Data Engineer

We are digital builders born in the cloud and currently, we are looking for a Senior Google Cloud Data Engineer.

Joining Nordcloud is the chance of a lifetime to leave your mark on the European IT industry! We use an agile, cloud-native approach to empower clients to seize the full potential of the public cloud.

Your daily work:

  • Designing, architecting, and implementing modern cloud-based data pipelines for our customers
  • Selecting appropriate cloud-native technologies/services to provide the most efficient solution for the business use-case
  • Making data accessible and usable for a new wave of data-powered apps and services
  • Working with and challenging customers to solve real use-case problems
  • Supporting pre-sales to design a proposal with the relevant teams (sales, pre-sales, product leads...)
  • Operating CI/CD (DevOps) pipelines and performing slight customization on them if/when needed
  • Understanding and writing small Infrastructure as Code modules/templates in Terraform if/when needed

Your skills and attributes of success:

  • Several years of overall professional programming experience, including 3+ years of hands-on experience building modern data platforms/pipelines in Google Cloud
  • At least one programming language: Python/Scala/Java
  • Experience in job orchestration (Airflow, Composer, Oozie, etc.)
  • At least two of the following skill sets including mandatory Google Cloud skills:
    • Experience in Google Cloud:
      • Google Cloud Professional Data Engineer certification
      • Other active certificates are a plus
      • BigQuery skills: querying tables, designing schemas, and structuring tables with partitioning/clustering when relevant (see the sketch after this list)
      • Pub/Sub experience
      • Dataflow (Apache Beam) or Dataproc (Apache Spark) framework experience, meaning architecting, creating, implementing, and maintaining data pipelines within these frameworks with streaming and batch workloads
      • Google Cloud Storage (Lifecycle policies, accesses...)
      • Datastore experience is a plus
      • Looker studio experience is a plus
      • Knowledge of more than one cloud is a plus (Azure is preferred in addition to Google Cloud)
    • Experience in Big Data technologies (in the cloud or on-prem):
      • Spark (Scala or pySpark)
      • Hadoop, HDFS, Hive/Impala, Pig, HBase, Kafka, NiFi, ... (not all technologies are needed; this is just to give an idea)
      • Familiarity with Big Data file formats (Parquet, Avro, ORC, ...)
      • Experience with Lakehouse formats is a plus (Delta lake, Apache Iceberg, Apache Hudi)
      • MongoDB or Cassandra is a plus
      • Experience with data lake architectures and designing data lake architecture
      • Familiarity with data-mesh architectures is a plus
  • Data warehousing experience:
    • Migrating from on-prem to cloud
    • Data modeling (Kimball, Inmon, Data Vault, etc.) is a plus
    • Advanced SQL in any SQL dialect/framework
    • Experience building ETL processes for data warehousing solutions
  • Consultancy experience
  • Leadership and people skills are a strong plus
  • Previous experience gained in mid-size/large, international companies
  • Fluent communication skills in English and Finnish
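
As a flavor of the BigQuery partitioning/clustering skills called out above, here is a minimal sketch using the google-cloud-bigquery client; the dataset, table, and column names are invented for illustration.

    from google.cloud import bigquery

    client = bigquery.Client()  # uses application-default credentials

    # Assumes an existing "analytics" dataset in the default project.
    # Partition by day and cluster by user_id so selective queries scan less data.
    client.query("""
        CREATE TABLE IF NOT EXISTS analytics.page_views (
          user_id  STRING,
          url      STRING,
          event_ts TIMESTAMP
        )
        PARTITION BY DATE(event_ts)
        CLUSTER BY user_id
    """).result()

    # Filtering on the partition column keeps the scan (and cost) small.
    rows = client.query("""
        SELECT url, COUNT(*) AS views
        FROM analytics.page_views
        WHERE DATE(event_ts) = '2023-01-01'
        GROUP BY url
        ORDER BY views DESC
        LIMIT 10
    """).result()
    for row in rows:
        print(row.url, row.views)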

If you don’t meet all of the desired criteria but still fit most of the requirements, we encourage you to apply anyway. Let’s find out together if we are a good fit for each other!

What do we offer in return?

  • A highly skilled multinational team
  • Individual training budget and exam fees for partner certifications (Azure, AWS, GCP) and additional certification bonus covered by Nordcloud
  • Access to join and the possibility to create knowledge-sharing sessions within a community of leading cloud professionals
  • Flexible working hours and freedom to choose your tools (laptop and smartphone) and ways of working
  • Freedom to work fully remotely within the country of Finland
  • Local benefits such as extensive private health care and wellness benefits

      Please read our Recruitment Privacy Policy before applying. All applicants must have the right to work in Finland.

      Learn more about #NordcloudCommunity. If you’d like to join us, please send us your CV or LinkedIn profile.

      About Nordcloud

      Nordcloud, an IBM company, is a European leader in cloud advisory, implementation, application development, managed services, and training. It’s a recognized cloud-native pioneer with a proven track record of helping organizations leverage the public cloud in a way that balances quick wins, immediate savings, and sustainable value. Nordcloud is triple-certified across Microsoft Azure, Google Cloud Platform, and Amazon Web Services – and is a Visionary in Gartner’s Magic Quadrant for Public Cloud IT Transformation Services. Nordcloud has 10 European hubs and over 1,500 employees and counting, and it has delivered over 1,000 successful cloud projects.

      Learn more at nordcloud.com

      #LI-Remote

      +30d

      Data & Analytics Engineer

      Skiptown | Charlotte, NC (Remote)
      nosql, sql, Firebase

      Skiptown is hiring a Remote Data & Analytics Engineer

      The Data & Analytics Engineer will work directly with the Head of Engineering to improve and scale data infrastructure at Skiptown, the all-in-one ecosystem for pets and their people, assisting with Skiptown’s growth as we build 50+ locations in the next 5 years.

      As our Data & Analytics Engineer you will…

      • Define the data pipeline architecture and build/maintain the data analytics infrastructure for the organization across our 5 enterprise and client-facing applications
      • Collaborate with business leaders to build the roadmap for internal data products that support all departments (with a focus on Operations and Marketing)
      • Produce data pipelines that allow for high levels of customized communication with our large clientbase through internal products and external marketing tools
      • Create usable data products (dashboards, visualization tools, etc.) to support KPI tracking and team access to data insights
      • Provide clean, transformed, and highly accurate data to be piped into relevant systems
      • Support the two-way integration of external software and tools with our internal data
      • Act as a Q/A tester for any large product releases

      The ideal Data & Analytics Engineer is someone who…

      • Lives and breathes data, and gets excited about data warehousing, creating innovative data solutions, and maintaining data quality
      • Is eager to architect a data analytics infrastructure from the ground up
      • Is proficient in selecting and implementing systems to support data analysis and pipeline development
      • Has the skills to effectively communicate with technical and non-technical people
      • Has an entrepreneurial spirit - you’re comfortable with solving unfamiliar problems

      Requirements:

      • 3+ years of relevant work experience
      • Experience with both SQL and NoSQL databases, warehousing technologies, ETL/ELT, and event tracking tools
      • Experience building data pipelines from scratch and selecting the tools to support those pipelines
      • Bonus: knowledge of dbt, BigQuery, Firebase, Google Analytics, and other similar platforms
      • Startup experience preferred

      Structure & Benefits:

      • Prefer candidates based in Charlotte, NC or Atlanta, GA. Will consider remote work for the right candidate.
      • Medical, Dental and Vision Insurance
      • Unlimited PTO
      • DoorDash DashPass

      About Skiptown:

      Skiptown is on a mission to make the lives of pets and their people easier, and even more fun, through a tech-enabled, premium pet services ecosystem, and state-of-the-art facilities. Our 24,000 sq ft flagship location in Charlotte offers dog daycare, boarding, grooming and a social bar - and will soon expand to include retail, veterinary, training and transportation services. Skiptown is well-funded and preparing to launch 50+ locations across the country in the next 5 years.

      See more jobs at Skiptown

      Apply for this job

      +30d

      Data Engineer

      BlueLabs | Remote or Washington, District of Columbia, United States
      tableau, airflow, sql, oracle, mobile, git, java, c++, postgresql, python, AWS

      BlueLabs is hiring a Remote Data Engineer

      About BlueLabs

      BlueLabs is a leading provider of analytics services and technology for a variety of industry clients, including government, business, and political campaigns. We help our clients optimize their engagements with individual customers, supporters, and stakeholders to achieve their goals. Simply put: we help our partners do the most good by getting the most from their data.


      Today, our team of data analysts, scientists, engineers, and strategists come together from diverse backgrounds to share a passion for using data to solve the world’s greatest social and analytical challenges. We’ve served more than 400 organizations, ranging from government agencies and advocacy groups to unions, political campaigns, international groups, and companies. Along the way, we’ve developed some of the most innovative tools available in analytics, media optimization, reporting, and influencer outreach, serving a diverse set of industries including automotive, travel, consumer packaged goods, entertainment, healthcare, media, telecom, and more.


      About the team:

      The Insights division creates and manages the underlying data that drives our day-to-day work. This is a new team at BlueLabs, created to meet our organization’s evolving client base and business needs, to ensure that BlueLabs is providing the most innovative data and analysis to our clients, all while developing and coaching team members to grow and respond to our company’s goals.

       

      The BlueLabs Insights practice works with non-profit, political, and private sector clients to provide them with high quality analysis to better understand the environment they operate in and inform future decision making. 


      Ripple is a proprietary, business-to-business technology product that helps our clients identify, engage, and measure impact on influencers who matter to their causes or brands. You will join a small and growing team of data, engineering, and product professionals to help us solve some exciting and challenging problems and to deploy the next generation of Ripple. 


      About the role:

      As a Data Engineer, you will help the Ripple team establish and maintain data pipelines for internal data sources and data sources related to our client engagements. The Data Engineer is critical to our client work, which requires proactive and continuous improvement as well as  prompt responsiveness to changing circumstances – particularly in handling data quality issues and adjusting the team’s end-to-end pipeline deployments. Your track record stepping into this role should reflect domain knowledge in data ingestion and transformation, including experience adapting to changing technologies and/or client priorities.  The Data Engineer reports to the Ripple Product Manager. 


      In this position you will:

      • Analyze, establish and maintain data pipelines that regularly deliver transformed data to data warehouses
      • Read in data; process and clean it; transform and recode it; merge different data sets together; and reformat data between wide and long (see the reshaping sketch after this list)
      • Create documentation for data pipelines and data sets
      • Work closely with data analysts to understand, identify and effectively respond to their specific needs
      • Coordinate with the vendors, clients, and other stakeholders as needed to stand-up or respond to issues with data pipelines
      • Perform one-off data manipulation and analysis on a wide variety of data sets
      • Produce scalable, replicable code and engineering solutions that help automate repetitive data management tasks
      • Make recommendations and provide guidance on ways to make data collection more efficient and effective
      • Develop and streamline our internal data resources into more efficient and easier to understand taxonomies
      • Ensure a high-level of data accuracy through regular quality control
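
      As a flavor of the wide-to-long reshaping mentioned in the list above, here is a minimal pandas sketch; the column names are invented for illustration.

          import pandas as pd

          # Wide: one row per respondent, one column per survey wave (hypothetical data).
          wide = pd.DataFrame({
              "respondent_id": [1, 2],
              "wave_1": [3.5, 4.0],
              "wave_2": [3.8, 4.2],
          })

          # Wide -> long: one row per (respondent, wave) observation.
          long_df = wide.melt(id_vars="respondent_id", var_name="wave", value_name="score")

          # Long -> wide again, e.g. for a report layout.
          wide_again = long_df.pivot(
              index="respondent_id", columns="wave", values="score"
          ).reset_index()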


      What we are seeking:

      • 2+ years of experience working with data pipeline solutions; beyond exact years, we seek candidates whose experience working with data pipelines provides the ability to proactively advise and then deploy solutions for our client-based teams
      • Experience processing data using scripting languages like Python or R, or compiled languages like Java, C++, or Scala. Python preferred
      • Understanding of how to manipulate data using SQL or Python
      • Experience designing data models that account for upstream and downstream system dependencies
      • Experience working in modern data processing stacks using tools like Apache Airflow, AWS Glue, and dbt
      • Experience with an MPP database such as Amazon Redshift, Vertica, BigQuery, or Snowflake, and/or experience writing complex analytics queries in a general-purpose database such as Oracle or PostgreSQL (see the query sketch after this list)
      • Familiarity with Git, or experience with other version control system
      • A high attention to detail and ability to effectively manage and prioritize several tasks or projects concurrently 
      • Effective communication and collaboration skills when working with team members of varied backgrounds, roles, and functions
      • Passion in applying your skills to our social mission to problem-solve and collaborate within a cross-functional team environment
      • Ability to diagnose and improve database and query performance issues
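
      For the "complex analytics queries" point above, here is a minimal window-function sketch against PostgreSQL using psycopg2; the DSN, table, and columns are hypothetical.

          import psycopg2

          conn = psycopg2.connect("dbname=analytics user=report")  # assumed DSN
          with conn, conn.cursor() as cur:
              # Rank each donor's contributions by recency (window function).
              cur.execute("""
                  SELECT donor_id,
                         amount,
                         contributed_at,
                         ROW_NUMBER() OVER (
                             PARTITION BY donor_id
                             ORDER BY contributed_at DESC
                         ) AS recency_rank
                  FROM contributions
                  WHERE contributed_at >= %s
              """, ("2022-01-01",))
              for donor_id, amount, contributed_at, rank in cur.fetchmany(10):
                  print(donor_id, amount, contributed_at, rank)
          conn.close()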


      You may also have experience:

      • Working on political campaigns or in progressive advocacy
      • Working with voter files or large consumer datasets
      • Working with geospatial data
      • Working with business intelligence software like Tableau, Power BI, or Data Studio


      Recruitment process

      We strive to hire efficiently and transparently. We expect to hire this position in January 2023. To get there, we anticipate the successful candidate will complete three interviews (HR 15 minutes, technical interview 45 minutes, and team interview 60 minutes), all virtually. 


      What We Offer:

      BlueLabs offers a friendly work environment and competitive benefits package including:

      • Premier health insurance plan
      • 401K matching
      • Unlimited vacation leave
      • Paid sick, personal, and volunteer leave
      • 15 weeks paid parental leave
      • Professional development & learning stipend
      • Macbook Pro laptop & tech accessories
      • Bring Your Own Device (BYOD) stipend for mobile device
      • Employee Assistance Program (EAP)
      • Supportive & collaborative culture 
      • Flexible working hours
      • Telecommuting/Remote options
      • Pre-tax transportation options 
      • Lunches and snacks
      • And more! 


      The salary for this position is $85,000 annually.


      While we prefer this position to be in the Washington, DC area, we are open to considering candidates from within the U.S. 


      At BlueLabs, we celebrate, support and thrive on differences. Not only do they benefit our services, products, and community, but most importantly, they are to the benefit of our team. Qualified people of all races, ethnicities, ages, sex, genders, sexual orientations, national origins, gender identities, marital status, religions, veterans statuses, disabilities and any other protected classes are strongly encouraged to apply. As an equal opportunity workplace and an affirmative action employer, BlueLabs is committed to creating an inclusive environment for all employees. BlueLabs endeavors to make reasonable accommodations to the known physical or mental limitations of qualified applicants with a disability unless the accommodation would impose an undue hardship on the operation of our business. If an applicant believes they require such assistance to complete the application or to participate in an interview, or has any questions or concerns, they should contact the Director, People Operations.  BlueLabs participates in E-verify.

      See more jobs at BlueLabs

      Apply for this job

      +30d

      Data Engineer

      Trupanion | Remote, Canada
      agile, sql, azure

      Trupanion is hiring a Remote Data Engineer

      Description

      Trupanion is a leading provider of medical insurance for cats and dogs in North America. Our mission is to help the pets we all love receive the veterinary care they need. At Trupanion, we offer a collaborative, casual, and pet-friendly environment where everyone is encouraged to be themselves.

      Position Summary:

      The Data Engineer is responsible for designing and building reliable, scalable and auditable data systems and tools for deploying and monitoring our mission-critical data systems and underlying infrastructure in Microsoft Azure. You’ll work with your team, peers, and executives to drive the evolution of Continuous Delivery for continuing growth and easier management.

      Candidates for this position have the option to work remotely from anywhere in Canada.

      Experience:

      • Bachelor’s level degree in Computer Science, Engineering, or appropriate work experience required
      • Minimum 2+ years of experience designing, building, and enhancing ETL processes for data warehousing, data marts, data integrations, and data analysis across the enterprise
      • Minimum 2+ years of technical experience with data warehousing in a Microsoft/Azure environment with the following tools or similar: SQL Server, Azure SQL, Azure Synapse Analytics, Azure Data Lake Storage (ADLS Gen2), Azure Stream Analytics, Databricks, Delta Lake, Apache Spark/Spark SQL (see the sketch after this list). Experience with similar tools in a Linux/AWS environment is a plus.
      • Experience required with software and infrastructure change management, release management and source code control
      • Experience with Infrastructure as Code (IaC) and Policy as Code (PaC) a plus
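
      As one flavor of the Spark/Spark SQL work named above, here is a minimal PySpark sketch, not Trupanion's pipeline; the storage paths and columns are invented.

          from pyspark.sql import SparkSession, functions as F

          spark = SparkSession.builder.appName("claims_etl").getOrCreate()

          # Hypothetical raw claims feed landed in the lake (e.g. an ADLS Gen2 path).
          claims = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/claims/")

          # Clean and aggregate: paid amount per policy per month.
          monthly = (
              claims.filter(F.col("status") == "paid")
                    .withColumn("month", F.date_trunc("month", F.col("claim_date")))
                    .groupBy("policy_id", "month")
                    .agg(F.sum("amount").alias("paid_amount"))
          )

          # Write to a curated zone for downstream marts.
          monthly.write.mode("overwrite").partitionBy("month").parquet(
              "abfss://curated@examplelake.dfs.core.windows.net/claims_monthly/"
          )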

      Skills, Knowledge & Abilities:

      • Proven experience in designing and implementing data pipelines for a variety of flows (data integrations across systems, ETL/ELT pipelines, streaming analytics, big data analytics)
      • Hands-on experience with logging and monitoring solutions
      • Experience with Microsoft Azure IaaS & PaaS solutions
      • Experience driving towards Continuous Delivery while valuing and maintaining a strong attention to detail
      • Experience with Agile software development organizations
      • An ability to quickly identify and drive to the optimal solution when presented with a series of constraints

      Compensation:

      • The salary range for this position is $153,180-$166,500 CAD on a full-time schedule
      • Along with the base salary, Trupanion employees may be eligible for monthly bonuses.
      • Trupanion may also provide Restricted Stock Units, which vest over 4 years

      Benefits and Perks:

      • Employer-paid extended health coverage for you and your family
      • Trupanion will partner with Wealthsimple to register your RRSP, pension, etc.
      • Four weeks of paid time off
      • Five weeks, paid, sabbatical after five years of employment
      • Employer-paid medical insurance for one pet (cat or dog)
      • Paid time off to volunteer at nonprofit organizations
      • Open, casual, pet-friendly, and fun work environment

      About Trupanion:

      We’re all about helping pets. We promote a cohesive and nimble team environment, and we hire, develop and promote team members. We trust each other. We are transparent and honest. We care about one another and want to see our team members succeed, personally and professionally. We strive to promote from within and reduce bureaucracy to allow creative thinking. We’re focused on providing continuous training and support to all team members to encourage long-term happiness and success.

      Take a look inside our office and see for yourself:

      https://www.facebook.com/Trupanion/videos/10155423763702974/

      We’re more than insurance – we’re a tech company too! Learn more about how you can use Trupanion to pay your vet directly here: https://www.youtube.com/watch?v=vdWZ4KHiPTQ

      Trupanion Team DNA:

      At Trupanion, we achieve great things together when we are:

      • Caring: We are kind to each other and assume positive intent.
      • Collaborative: We work together to achieve company goals (we not me).
      • Courageous: We are determined, take risks, and make bold moves.
      • Curious: We seek new information to continually better ourselves and our work.
      • Honest: We believe candid communication leads to successful teamwork.
      • Inclusive: We welcome and value all people and perspectives.
      • Nimble: We readily adapt and evolve in pursuit of progress and innovation.

      For more information about Trupanion, visit http://trupanion.com/about

      Trupanion is an equal opportunity employer and embraces diversity. We are committed to building a team that represents a variety of backgrounds, abilities, perspectives, and skills.

      We will ensure that individuals are provided reasonable accommodation to participate in the job application or interview process, perform essential job functions, and receive other benefits and privileges of employment. Please contact us to request accommodation.

       #LI-Remote

      See more jobs at Trupanion

      Apply for this job

      +30d

      Data Engineer

      National Funding | San Diego, CA (Remote)
      tableau, scala, sql, Design, python, AWS

      National Funding is hiring a Remote Data Engineer

      Data Engineer- San Diego, CA (La Jolla/UTC area)

      Hybrid or Remote, Full time M-F 8am-5pm PST, Working Onsite in San Diego from time to time

      Being authorized to work in the U.S. is a precondition of employment.

      National Funding is not considering candidates requiring 1099 or C2C.

      Exempt/Salary: $101k-151k + Bonus

      The role is Remote, but ONLY in these particular states: Arizona, California, Georgia, Nebraska, Florida, Utah, Louisiana, Missouri, Oklahoma, Pennsylvania, Tennessee, Texas.

      National Funding is continuing to grow its Data Science Department and has an exciting opportunity for a Data Engineer. Reporting to the Director, Data Science, the Data Engineer will help streamline our data science workflows, adding value to our product offerings and building out lifecycle and retention models. In addition, the Data Engineer will work closely with the data science and business intelligence teams to develop data models and pipelines for research, reporting, and machine learning.

      Major Responsibilities:

      • Analyze, combine, and organize raw data from multiple sources
      • Design and build data architecture and pipelines
      • Prepare data for prescriptive and predictive analytics
      • Explore ways to enhance data quality and reliability
      • Partner with internal stakeholders to identify opportunities, evaluate business needs, and to create solutions that will improve decision making.
      • Use quantitative analysis, machine learning and data science tools to improve our processes
      • Design metrics, reports, dashboards and BI solutions for data quality, and various data science applications
      • Interpret trends and patterns
      • Develop analytical tools and programs
      • Collaborate with data scientists and architects on several projects

      Knowledge, Skills, and Abilities Required:

      • Bachelor's or Master's degree in Computer Science or a related technical field.
      • 4+ years of Data Science or Data Engineering experience, specifically with data pipelines, ETL, data modeling and data architecture
      • Hands-on experience designing and implementing AWS based data storage
      • Strong programming skills, especially in Python or Scala
      • Expertise in SQL
      • Experience with distributed data/computing tools such as Spark
      • Experience with data presentation and visualization with tools such as Tableau
      • Show efficiency in handling data - tracking data lineage, ensuring data quality, and improving discoverability of data
      • Excellent communication and presentation skills.

      Why National Funding?

      • Positive, energetic, passionate, business casual environment with management who commits to your success
      • Fantastic benefits package: Our current benefit package includes medical, dental, vision, life, LTD and AD&D insurance as well as a 401(k) Retirement Savings plan with an employer match. Eligibility for all benefits will start the first of the month following 60 days of employment.
      • Numerous employee events throughout the year, including our annual traditions such as a Day at the Del Mar Racetrack, Del Mar Mud Run, Bring Your Kid to Work Day, Holiday Party, Employee and Family Picnic, sporting events and more.

      National Funding is one of the leading providers of short term loans and equipment leasing for small businesses across the United States. In both 2013 and 2014, we were ranked by the San Diego Business Journal as one of the 100 Fastest Growing Private Companies in San Diego and listed on the Inc. 5000 List of America’s Fastest Growing Private Companies. We serve the small business community nationwide by offering a range of financial services and products. Since 1999, we have been in the forefront of the equipment leasing business, working with businesses in hundreds of communities and industries to expand and upgrade their business equipment. As we have grown, so too has our product line, and now we are one of the country’s largest private lenders of small business loans. Our customers call on us to get working capital, merchant cash advances, credit card processing, and, of course equipment leasing.

      Please apply through our website at www.NationalFunding.com/Careers and attach your resume. National Funding is an Equal Opportunity Employer.

      See more jobs at National Funding

      Apply for this job

      +30d

      Data Engineer - Databricks

      agile, tableau, jira, sql, azure, git, c++, python

      Penn Foster is hiring a Remote Data Engineer - Databricks


      See more jobs at Penn Foster

      Apply for this job

      +30d

      Data Engineer Coach

      Makers | London, GB (Remote)
      Design

      Makers is hiring a Remote Data Engineer Coach

      Data Engineering Coach

      We're always looking for great coaches. Whenever you read this, we're ready to hear from you.

      Everyone deserves a job they love. Makers trains people as software engineers, helps them find that job, and then helps them thrive.

      You deserve a job you love. We’ve got one for you.

      Makers is a professional school for software engineers. We began as Europe’s first ‘coding bootcamp’, where career changers retrained as software developers. Since 2013 we’ve helped more than 2,000 people begin a new career in software that way. We’re one of the UK’s biggest software developer apprenticeship providers, helping people train at no cost to them, and we’re not stopping there. We partner with the UK’s most ambitious organisations to design innovative training, from DevOps to Data, Quality Engineering, Leadership, and beyond.

      We’re pretty good at what we do, according to our clients (both developers and employers), industry bodies, the press and the government. We can only get better by working with you.

      Overview

      You want to be more than just an engineer or a manager or a teacher or a consultant: you want to help people progress in their careers at an unbelievable rate. You want to help them change their lives, to power the organisations they work with, and to make the industry and world a better place.

      As a technical coach at Makers, your role is to help people grow. You’ll join our team of software industry professionals working across all of our programmes, including:

      • Immersive Engineering Training
        The core of our approach is immersive engineering training. Learners of all backgrounds join us full-time to gain modern tech engineering skills and become adaptive self-directed learners who can quickly become productive members of any dev team. Coaches run workshops, provide 1:1 support, give feedback and overall ensure that learners learn.

      • Specialist Training
        Some of our engineers go on to study specific specialisms targeted towards their roles. For example, DevOps and SRE, Engineering Leadership, or use of specific stacks. Coaches take on the challenge of curriculum development and iteration, supported by our educational tools and ethos. Do you have a specialism you want to share? Let’s talk about it!

      • In-Role Coaching
        The first year in the industry is an exciting and challenging experience. Our technical coaches also support on that journey to help learners navigate the team environment, and accelerate their learning.

      • Interview Coaching
        Performing well at a technical interview can be a real test for any dev, particularly at the beginning of their career. We coach developers in taking their skills to the next level, navigating the job-hunt, and understanding what they need to demonstrate to interviewers.

      • Curriculum Development and Innovation
        We’re on the cutting edge of technical education and are always searching for new opportunities to help people learn. No matter your path as a coach, you can expect to gain the skills not just to teach but to design highly effective educational experiences — and potentially advance the field of technical education as a whole.

      You’ll have the opportunity to work in all of these areas as a part of building your practice as an effective technical educator. You may move around to learn new educational techniques and styles, or spend time gaining in-depth expertise in one area.

      In all of these areas, you’ll play each of these roles:

      • Coach.
        Through active listening and questions, you’ll help learners reflect on their learning, their goals, the problems they are facing and the solutions they may try.

      • Mentor.
        You’ll use your own experience as a professional developer to model skills and behaviour necessary to be a dev.

      • Technical expert.
        You’ll use your technical knowledge to give expert feedback to learners as well as to help guide them towards solving their own problems.

      • Emotional support.
        The learning experience can be intense. Learners are often under a lot of stress and financial pressure. You should be there to support them and help them make the best choices they can.

      • Education expert.
        Using recognised educational tools, you’ll help the learners organise their learning, sequence material, rebuild their mental models and ultimately become independent self-learners.

      We’re building the best software engineering school in the world. We want our approach to be the foundation of how technical skills are taught everywhere. Come and be a part of it.

      You should:

      • have experience as a professional software engineer (any background — including DevOps, SRE, SDET, Data)
      • have experience helping people to grow (coaching, teaching, mentoring, tutoring, or something similar. This might be informal as part of a hobby or volunteering, or it could have been as part of a management role)
      • be comfortable talking to people (you’ll need to coach learners 1:1, as well as run workshops in front of groups of up to 30 learners)
      • be a self-led learner yourself (both to model behaviour to learners, and because our culture will need you to proactively look for information and feedback)

      Perks

      • Working in a company that’s built on trust over fear with a core mission to transform lives;
      • Growing as a person, by learning transparency, vulnerability, the growth mindset, emotional intelligence, perhaps some hard truths about yourself, and how to give and receive feedback honestly and productively;
      • Able to join weekly yoga and meditation sessions run for the learners;
      • 15% discount on classes & treatments at Triyoga Shoreditch;
      • Company pension contributions;
      • Unlimited holiday - we have a minimum holiday policy, rather than a maximum. We encourage our team to take at least 30 days a year, including a Winter break when we shut down between late December and January. The whole company takes a break, so you’ll have peace of mind in knowing that nobody is working, not just you.
      • Private Medical Insurance (after you've been with us at least 3 months);
      • Being surrounded by a diverse group of bright, motivated people, who all really care about doing the very best they can for our learners, for our hiring partners and for each other.

      At Makers, diversity and inclusion are core to our mission. Ensuring our people feel included and valued is critical for us to live the Makers' values: Nurture a growth mindset, Trust over fear, and Prioritise joy. We are actively working towards fostering a strong culture of belonging for both our students and our people and encourage applications from all backgrounds, abilities, communities, and industries. We see the value behind the new ideas you could bring to help us achieve our mission.

      If reading this job description has given you any doubt about whether you’d feel welcome or included at Makers, first, we’re sorry. Second, we’d really like to hear from you about it so we don’t do it again.

      Contact Sandy Vo - Talent Partner, if you have any questions or require accommodations / adjustments to be made.


      See more jobs at Makers

      Apply for this job

      +30d

      Senior Data Engineer

      Instalent | Budapest, HU (Remote)
      agile, scala, airflow, docker

      Instalent is hiring a Remote Senior Data Engineer

      Our partner is a rapidly expanding international technology and data science business. They build high quality SaaS solutions which automate data science using advanced machine learning and deep learning techniques. They use some of the trendiest technology on the planet so you will never get bored of doing the same thing.

      About the role

      They are looking for a talented developer to join their team. The job entails building and operating high performance big data pipelines to facilitate all their SaaS products for some of the world’s leading brands. You'll be part of a remote team of developers and data scientists based in the United Kingdom, South Africa and Hungary.

      Requirements:

      • Experience writing testable, functional Scala in a production-grade system
      • Experience using Apache Spark with Scala in a production system
      • Experience with Spark orchestration technologies such as Apache Airflow is a plus (a minimal sketch follows this list)
      • Experience using a cloud platform to architect and build data pipelines
      • Ability to pick up new technologies quickly and deliver features in an extremely agile way
      • Comfort administering a Hadoop cluster on a cloud platform such as Databricks
      • Experience deploying systems with Docker containers
      • A strong JVM development background, including Spring
      • Experience with data streaming technologies such as Kafka
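
      As a concrete reading of the Spark-plus-Airflow point above, here is a minimal sketch of an Airflow DAG that submits a Spark job once a day. It assumes an Airflow deployment with the apache-spark provider package installed; the DAG id, jar path, main class, and the "spark_default" connection are illustrative placeholders, not details from the posting.

      # Minimal Airflow DAG orchestrating a (Scala) Spark job on a daily schedule.
      from datetime import datetime

      from airflow import DAG
      from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

      with DAG(
          dag_id="daily_pipeline",          # hypothetical DAG name
          start_date=datetime(2023, 1, 1),
          schedule_interval="@daily",
          catchup=False,
      ) as dag:
          run_spark_job = SparkSubmitOperator(
              task_id="run_spark_job",
              application="/opt/jobs/pipeline-assembly.jar",  # hypothetical artifact
              java_class="com.example.Pipeline",              # hypothetical main class
              conn_id="spark_default",
              application_args=["{{ ds }}"],  # pass the execution date to the job
          )

      The {{ ds }} template hands the execution date to the job, a common pattern for partition-aware daily batch pipelines.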

      What they offer:

      • Remote and flexible working
      • Competitive Salary
      • Career Development
      • Exciting Clients and Projects
      • Talented Teams
      • Benefits: stock option plan, staff referral scheme, quarterly staff events, wellness day, volunteering opportunities, birthday lie-in, sunshine hours, Christmas gift cards, flexible leave policy, private health care, cafeteria system, enhanced maternity & paternity leave, drinks & snacks, fruit.

      See more jobs at Instalent

      Apply for this job

      +30d

      Senior Data Engineer

      airflow, sql, Design, ruby, java, python, AWS, javascript

      Brightside is hiring a Remote Senior Data Engineer

      Senior Data Engineer - Brightside Health

      See more jobs at Brightside

      Apply for this job

      +30d

      Data Engineer

      Expression Networks - Washington, DC (Remote)
      agile, Ability to travel, nosql, sql, Design, scrum, java, c++, python, javascript

      Expression Networks is hiring a Remote Data Engineer

      Founded in 1997 and headquartered in Washington DC, Expression Networks provides data fusion, data analytics, software engineering, information technology, and electromagnetic spectrum management solutions to the U.S. Department of Defense, Department of State, and national security community. Expression’s “Perpetual Innovation” culture focuses on creating immediate and sustainable value for our clients via agile delivery of tailored solutions built through constant engagement with them. Expression Networks was ranked #1 on Washington Technology's 2018 Fast 50 list of the fastest-growing small-business government contractors and was named a Top 20 Big Data Solutions Provider by CIO Review.

      We make sure to provide everyone the tools and opportunities to grow while working on some of the newest technologies in the industry. With Covid-19 being a major theme of the last two years, growing a collaborative culture has been one of the key focuses of our C-suite and upper management. We get excited about celebrating our professionals' milestones, accomplishments, and promotions, the challenges they overcome, and the many other moments that make for an engaging, collaborative environment.

      We are looking to bring on a mid-level Data Engineer to add to the continued growth of our Data Science division. This position will work in a team led by a principal data engineer on tasks related to designing and delivering high-impact data architecture and engineering solutions to our customers across a breadth of domains and use cases.

      Location:

      • Remote, with the ability to travel per project requirements.

      Security Clearance:

      • Ability to obtain a Secret clearance or higher

      Responsibilities:

      • Developing, testing, and documenting software code for data extraction, ingestion, transformation, cleaning, correlation, and analytics
      • Participating in end-to-end architectural design and development lifecycle for new data services/products, and making them operate at scale
      • Participating in cross-functional team collaboration to understand customer requirements, design prototypes, and optimize existing data services/products
      • Demonstrating Data Science excellence in the teams you work with across the organization, and mentoring junior members in the Data Science division
      • Participating in research, case studies, and prototypes on cutting-edge technologies and how they can be leveraged

      Required Qualifications:

      • 3+ years of experience bringing databases, data integration, and data analytics/ML technologies to production with a Bachelor’s degree in Computer Science/Data Science/Computer Engineering or relevant field
      • Mastery in developing software code in one or more programming languages (Python, JavaScript, Java, Matlab, etc.)
      • Expert knowledge in databases (SQL, NoSQL, Graph, etc.) and data architecture (Data Lake, Delta Lake)
      • Knowledgeable in machine learning/AI methodologies
      • Experience with one or more SQL-on-Hadoop technologies (Hive, Impala, Spark SQL, Presto, etc.); a minimal Spark SQL sketch follows this list
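
      As a hedged illustration of the SQL-on-Hadoop requirement above, the sketch below registers a file-backed dataset as a temporary view and queries it with Spark SQL. The HDFS path and column names are placeholders, not details from the posting.

      # Minimal Spark SQL example: query a Parquet dataset through a temp view.
      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("sql-on-hadoop-demo").getOrCreate()

      events = spark.read.parquet("hdfs:///data/events")  # hypothetical dataset
      events.createOrReplaceTempView("events")

      daily_counts = spark.sql("""
          SELECT event_date, COUNT(*) AS event_count
          FROM events
          GROUP BY event_date
          ORDER BY event_date
      """)
      daily_counts.show()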

      Preferred Qualifications:

      • Experience in short release cycles and the full software lifecycle
      • Experience with Agile development methodology (e.g., Scrum)
      • Strong writing and oral communication skills to deliver design documents, technical reports, and presentations to a variety of audiences

      Benefits:

      Expression Networks offers competitive salaries and benefits, such as:

      • 401k matching
      • PPO and HDHP medical/dental/vision insurance
      • Education reimbursement up to $10,000/yr.
      • Complimentary life insurance
      • Generous roll-over PTO and 11 days of holiday leave
      • Onsite gym facility and trainer
      • Commuter Benefits Plan
      • In-office cold brew coffee

      Equal Opportunity Employer/Veterans/Disabled

      See more jobs at Expression Networks

      Apply for this job

      +30d

      Senior Data Engineer

      Expression Networks - Washington, DC (Remote)
      agile, nosql, sql, Design, scrum, java, c++, python, javascript

      Expression Networks is hiring a Remote Senior Data Engineer

      Expression Networks is looking to bring on a Senior Data Engineer to add to the continued growth we are seeing with our Data Science division. This position will lead a team in the design and execution of high-impact data architecture and engineering solutions to customers across a breadth of domains and use cases.

      About Expression Networks

      Founded in 1997 and headquartered in Washington DC, Expression Networks provides data fusion, data analytics, software engineering, information technology, and electromagnetic spectrum management solutions to the U.S. Department of Defense, Department of State, and national security community. Expression’s “Perpetual Innovation” culture focuses on creating immediate and sustainable value for our clients via agile delivery of tailored solutions built through constant engagement with them. Expression Networks was ranked #1 on Washington Technology's 2018 Fast 50 list of the fastest-growing small-business government contractors and was named a Top 20 Big Data Solutions Provider by CIO Review.

      We make sure to provide everyone with the tools and opportunities to grow while working on some of the newest technologies in the industry. With Covid-19 being a major theme over the last two years, growing a collaborative culture has been one of the main focuses of our C-suite and upper management. We get excited about celebrating our professionals' milestones, accomplishments, and promotions, the challenges they overcome, and the many other moments that make for an engaging, collaborative environment.

      Location:

      • Remote, with required travel for onsite client project delivery in the DC/VA/MD metropolitan area.

      Security Clearance:

      • Eligible for a Secret or higher-level clearance

      Primary Responsibilities:

      • Directly working and leading others on the development, testing, and documentation of software code and data pipelines for data extraction, ingestion, transformation, cleaning, correlation, and analytics
      • Leading end-to-end architectural design and development lifecycle for new data services/products, and making them operate at scale
      • Partnering with Program Managers, Subject Matter Experts, Architects, Engineers, and Data Scientists across the organization where appropriate to understand customer requirements, design prototypes, and optimize existing data services/products
      • Setting the standard for Data Science excellence in the teams you work with across the organization, and mentoring junior members in the Data Science division

      Additional Responsibilities:

      • Participating in technical development of white papers and proposals to win new business opportunities
      • Analyzing and providing feedback on product strategy
      • Participating in research, case studies, and prototypes on cutting-edge technologies and how they can be leveraged
      • Working in a consultative fashion to improve communication, collaboration, and alignment amongst teams inside the Data Science division and across the organization
      • Helping recruit, nurture, and retain top data engineering talent

      Required Qualifications:

      • 4+ years of experience bringing databases, data integration, and data analytics/ML technologies to production with a PhD/MS in Computer Science/Data Science/Computer Engineering or relevant field, or 6+ years of experience with a Bachelor’s degree
      • Mastery in developing software code in one or more programming languages (Python, JavaScript, Java, Matlab, etc.)
      • Expert knowledge in databases (SQL, NoSQL, Graph, etc.) and data architecture (Data Lake, Delta Lake)
      • Knowledgeable in machine learning/AI methodologies
      • Experience with one or more SQL-on-Hadoop technologies (Hive, Impala, Spark SQL, Presto, etc.)
      • Experience in short release cycles and the full software lifecycle
      • Experience with Agile development methodology (e.g., Scrum)
      • Strong writing and oral communication skills to deliver design documents, technical reports, and presentations to a variety of audiences

      Benefits:

      Expression Networks offers competitive salaries and benefits, such as:

      • 401k matching
      • PPO and HDHP medical/dental/vision insurance
      • Education reimbursement
      • Complimentary life insurance
      • Generous PTO and holiday leave
      • Onsite office gym access
      • Commuter Benefits Plan

      Equal Opportunity Employer/Veterans/Disabled

      See more jobs at Expression Networks

      Apply for this job

      +30d

      Azure Data Engineer

      Default Portal - London, GB (Remote)
      agile, sql, Design, azure

      Default Portal is hiring a Remote Azure Data Engineer

      Azure Data Engineer

      Work Pattern: Permanent – Full-Time Hire

      Clearance: Active SC Clearance or eligible to be security cleared to SC level

      Location: Remote; on-site meetings may be requested

      Salary: Competitive + 15% Bonus + Flexible Benefits (based on level of experience)


      The Company

      We are a specialist data, digital and cloud consultancy, focused on supporting our clients in successfully delivering on their digital transformation programmes. Our aim is to ensure we deliver value using innovative approaches that improve and expand their capabilities as well as their offerings to their customers.

      With demand for our services from our clients at an all-time high and continuous growth and success within our market sector, we are embarking on a major recruitment drive and keen to recruit talented data engineers to join our data engineering & analytics practice.

      The Role

      Working with high-profile clients, you’ll collaborate with them to analyse their existing processes and to identify and implement opportunities to optimise them. You’ll be developing solutions that iteratively improve existing capabilities or create entirely new ones.

      In this role, you will be working on a broad set of data initiatives for our clients, such as ensuring data pipelines are consistent and optimised across the delivery that you’re involved in. You’ll be proactive and comfortable supporting the data needs of multiple teams, systems and products.

      Key Responsibilities:

      • Working directly with clients to understand their business problems and translate these into data solutions, from prototype to production-ready code
      • Providing technical guidance and advice to help in the design and development of data solutions for data modelling and warehousing, data integration, and analytics
      • Implementing and optimising data pipelines to connect operational systems and data for analytics and BI systems
      • Designing and developing scalable data ingestion frameworks to transform a wide variety of datasets
      • Researching, analysing, and helping implement technical approaches for solving challenging and complex development and integration problems, supporting the build-out of a strategy and roadmap
      • Collaborating with the wider Amber team to improve and maintain a knowledge base of best practices and delivery templates to standardise sales and delivery efforts

      Requirements:

      • Advanced SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of database systems
      • Experience building and optimising data pipelines, architectures and data sets
      • Strong experience with SSIS; comfortable both maintaining complex processes and developing them from scratch
      • Ideally, exposure to Azure utilising some of the following: Azure Synapse Analytics, Azure SQL, Azure Data Factory, Azure Data Lake, Databricks, and Cosmos DB
      • Working experience with version control platforms, ideally through Azure DevOps
      • Working knowledge of agile development, including DevOps concepts
      • Experience in gathering and analysing system requirements
      • Familiarity with data visualisation tools such as Power BI is good to have
      • Experience of working within large, complex and geographically dispersed programmes
      • We are looking for a candidate with 4+ years of experience in a Data Engineer role, ideally with a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field, and hands-on experience with the software and tools listed above. A minimal sketch of one such pipeline step follows this list.
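
      As one hedged sketch of a pipeline step like those described above, the snippet below stages rows into an Azure SQL table with pyodbc. The server, database, table, and credentials are placeholders; in practice a step like this would usually be wrapped in an orchestrator such as Azure Data Factory or SSIS rather than run standalone.

      # Minimal load step: insert staged rows into an Azure SQL table.
      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 18 for SQL Server};"
          "SERVER=myserver.database.windows.net;"  # hypothetical server
          "DATABASE=analytics;"                    # hypothetical database
          "UID=etl_user;PWD=change-me"             # placeholder credentials
      )
      cursor = conn.cursor()

      rows = [("2024-01-01", 42), ("2024-01-02", 57)]  # stand-in for extracted data
      cursor.executemany(
          "INSERT INTO staging.daily_counts (event_date, event_count) VALUES (?, ?)",
          rows,
      )
      conn.commit()
      conn.close()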

      What we offer you

      • We believe that our people are what makes us the company we are. We adopt a partnership model, ensuring that we all share in the success of the business. With only 4 levels in our hierarchy between Consultant and Partner, everyone is empowered to have a positive influence on the way we do things. It’s a chance for you to grow your career with us whilst we scale up our business.
      • We offer a comprehensive private health and dental insurance plan through Aviva, as the well-being of our team is one of our highest priorities.
      • We also have a fun rewards scheme with Perkbox, which can offer discounts and freebies on a variety of goods and experiences.
      • We support our employees’ progress through their careers by offering to fund training programmes that help you upskill.
      • The chance to work in a supportive and growth focused environment and learn from senior subject matter experts whilst also securing a competitive salary and excellent bonus and benefits package.
      • The chance to work at the forefront of the latest technologies and innovations, on cutting-edge projects and programmes that will allow you the autonomy to work independently.
      • To be part of a team that embraces the strengths of diversity and inclusion. A collaborative outlook where your voice and ideas are always heard.
      • A platform that will support and allow you to push your own ideas to deliver on projects successfully.
      • We believe the best impact is the value we add, not the hours we sit at our desks. We promote a good work/life balance for all our staff and welcome discussions about flexible working.

      Interested?

      Then please get in touch by applying with the most recent copy of your CV, including a contact number, and we will contact you directly to discuss further.

      We welcome applications from all suitably qualified people regardless of gender, race, disability, age or sexual orientation. All applications are assessed purely on merit, against the capabilities and competencies required to fulfil the position.

      See more jobs at Default Portal

      Apply for this job

      +30d

      Principal Data Engineer, Senior Manager (Remote)

      Aimpoint Digital - Boston, MA (Remote)
      scala, sql, Design, azure, git, java, c++, docker, kubernetes, python, AWS

      Aimpoint Digital is hiring a Remote Principal Data Engineer, Senior Manager (Remote)

      Equivalent of Manager, Senior Manager, Director, etc.

      What you will do 

      Are you an experienced and accomplished data engineer looking to apply your expertise to solving complex and interesting data and analytics challenges using the best modern tools?

      Aimpoint Digital is a fast-growing, fully remote data and analytics consultancy. We partner with the most innovative software providers in the data engineering space to solve our clients' toughest business problems. Our approach to data engineering blends modern tools and techniques with a respect for the foundations of our craft.

      You will:

      • Become a trusted advisor working together with our clients, from data owners and analytic users to C-level executives
      • Engage and lead multi-disciplinary teams to solve complex use-cases across a variety of industries
      • Assess existing analytics infrastructure and business processes and advise on best-in-class modern solutions
      • Design and develop the analytical layer, building cloud data warehouses, data lakes, ETL/ELT pipelines, and orchestration tools
      • Work with modern tools such as Snowflake, Databricks, Fivetran, and dbt
      • Write code in SQL, Python, and Spark, and use software engineering best-practices such as Git and CI/CD
      • Support the deployment of data science and ML projects into production
        • Note: You will not be developing machine learning models or algorithms

      Who you are

      We are building a diverse team of talented and motivated people who deeply understand business problems and enjoy solving them. You are a self-starter who loves working with data to build analytical tools that business users can leverage daily to do their jobs better. You are passionate about contributing to a growing team and establishing best practices.

      As a Principal Data Engineer, you will also be expected to be able to own and manage your client engagement, take part in the development of our practice, aid in business development, and contribute innovative ideas and initiatives to our company.

      • Degree-educated in Computer Science, Engineering, Mathematics, or equivalent experience
      • Strong written and verbal communication skills
      • Experience managing stakeholders and collaborating with customers
      • Experience working with cloud data warehouses (Snowflake, Google BigQuery, AWS Redshift, Microsoft Synapse)
      • Experience working with ETL/ELT tools (Fivetran, dbt, Matillion, Informatica, Talend, etc.)
      • Experience working with cloud platforms (AWS, Azure, GCP) and container technologies (Docker, Kubernetes)
      • 5+ years working with relational databases and query languages
      • 5+ years building data pipelines in production and ability to work across structured, semi-structured and unstructured data
      • 5+ years of data modeling (e.g. star schema, entity-relationship); a minimal star-schema sketch follows this list
      • 3+ years writing clean, maintainable, and robust code in Python, Scala, Java, or similar coding languages is desirable
      • Expertise in software engineering concepts and best practices and/or DevOps is desirable
      • Experience working with big data technologies (Spark, Hadoop) is desirable
      • Experience preparing data for analytics and following a data science workflow to drive business results is desirable
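
      As a concrete reading of the data-modeling requirement above, here is a minimal star-schema sketch built through the Snowflake Python connector: one dimension table and one fact table that references it. The account, credentials, and table names are placeholders, not details from the posting.

      # Minimal star schema: a customer dimension plus an orders fact table.
      import snowflake.connector

      conn = snowflake.connector.connect(
          account="my_account",   # hypothetical account identifier
          user="etl_user",        # placeholder credentials
          password="change-me",
          warehouse="ANALYTICS_WH",
          database="ANALYTICS",
          schema="MARTS",
      )
      cur = conn.cursor()

      cur.execute("""
          CREATE TABLE IF NOT EXISTS dim_customer (
              customer_key INTEGER AUTOINCREMENT PRIMARY KEY,
              customer_id  STRING,
              region       STRING
          )
      """)
      cur.execute("""
          CREATE TABLE IF NOT EXISTS fact_orders (
              order_id     STRING,
              customer_key INTEGER REFERENCES dim_customer (customer_key),
              order_date   DATE,
              amount       NUMBER(12, 2)
          )
      """)
      conn.close()

      The fact table holds measures at the grain of one row per order, while the dimension holds the descriptive attributes those rows join to; keeping that split is what makes star-schema queries simple and fast.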

        See more jobs at Aimpoint Digital

        Apply for this job