Data Engineer, Baseball Systems
Boston Red Sox
Boston, MA

POSITION OVERVIEW: The Data Engineer, Baseball Systems will be a member of the baseball operations software development team and is responsible for integrating, collecting, processing, and storing many sources of baseball data, as well as designing and building new data solutions. This position must be comfortable with on-premises and cloud solutions and take the initiative to explore new optimizations and cutting-edge data technologies. This individual will work closely with our data architect, analysts, developers, and other members of baseball operations.

RESPONSIBILITIES:
- Build leading-edge baseball solutions together with the software development team, analysts, and others on new and existing baseball systems
- Build and maintain integration pipelines, often API- or file-based, while also identifying areas of improvement and spending time to re-architect when required
- Build and maintain infrastructure to optimize the extraction, transformation, and loading of data from various sources
- Design, build, and maintain data warehousing solutions for the software development and analytics teams
- Build and maintain tools for analysts to enable more efficient and extensive data modeling and simulation efforts
- Participate in key phases of the software development process for critical baseball applications, including requirements gathering, analysis, effort estimation, technical investigation, software design and implementation, testing, bug fixing, and quality assurance
- Actively participate with software developers and data architects in design reviews, code reviews, and other best practices
- Work closely at times with baseball analysts to design and implement data solutions
- Respond to and resolve technical problems and issues in a timely manner

SKILLS & QUALIFICATIONS:

TECHNICAL SKILLS
- Bachelor's degree in Computer Science, Software Engineering, Computer Engineering, Statistics, Information Systems, or a related field
- 2-3 years of experience in a Data Engineer role
- Proficiency with SQL and query optimization, stored procedures, views, and other database objects
- Experience building custom API integrations, interfacing with JSON, XML, and custom data structures
- Experience with AWS, GCP, or Azure cloud services, such as Cloud SQL, RDS, Redshift, Azure SQL, Azure SQL DW, or others
- Experience building data solutions using Python, C#, C++, Ruby, or other languages
- Experience with scheduling and workflow management platforms, such as Airflow (a minimal illustrative sketch appears below this posting)
- Experience with ETL tools and pipelines from various platforms
- Experience with big data frameworks such as Hadoop or Spark is a plus
- Experience with R and RStudio is a plus

GENERAL SKILLS
- Ability to work autonomously and as part of a team in a fast-paced environment
- High level of attention to detail with the ability to multi-task effectively
- Comfortable working remotely, using Zoom, Teams, Slack, Trello, and other tools to communicate with all team members
- High degree of professionalism and ability to maintain confidential information
- Excellent organizational and time management skills
- An understanding of baseball is a plus

The Red Sox (or FSM) requires proof of being up-to-date on COVID-19 vaccination as a condition of employment, subject to applicable legal requirements. Up-to-date means having received all recommended COVID-19 vaccination doses in the primary series and a booster dose(s) when eligible, per CDC guidelines.

Prospective employees will receive consideration without discrimination based on race, religious creed, color, sex, age, national origin, handicap, disability, military/veteran status, ancestry, sexual orientation, gender identity/expression, or protected genetic information.
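To make the kind of work described above concrete, here is a minimal, hypothetical sketch (not part of the listing) of an API-based ingestion step in Python: it pulls JSON from an invented stats endpoint and loads it into a local SQLite table standing in for a warehouse such as Redshift or Cloud SQL. The endpoint URL, field names, and table schema are placeholders; in practice a pair of functions like these would typically be wrapped in Airflow tasks.

# Minimal sketch of an API-to-warehouse ingestion step of the kind described above.
# The endpoint URL, JSON fields, and table name are hypothetical placeholders;
# SQLite stands in for a real warehouse (Redshift, Cloud SQL, etc.).
import sqlite3
import requests

API_URL = "https://example.com/api/v1/pitches"  # hypothetical stats endpoint

def fetch_pitches(game_id: str) -> list[dict]:
    """Pull raw pitch-level JSON for one game from the (hypothetical) API."""
    resp = requests.get(API_URL, params={"game_id": game_id}, timeout=30)
    resp.raise_for_status()
    return resp.json()["pitches"]

def load_pitches(rows: list[dict], db_path: str = "baseball.db") -> int:
    """Upsert flattened pitch records into a warehouse-style table."""
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS pitches (
               pitch_id   TEXT PRIMARY KEY,
               game_id    TEXT,
               velocity   REAL,
               pitch_type TEXT)"""
    )
    con.executemany(
        "INSERT OR REPLACE INTO pitches VALUES (:pitch_id, :game_id, :velocity, :pitch_type)",
        rows,
    )
    con.commit()
    con.close()
    return len(rows)

if __name__ == "__main__":
    # In a production setting, each function would become its own scheduled task.
    records = fetch_pitches(game_id="2023-04-01-BOS-NYY")
    print(f"loaded {load_pitches(records)} rows")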

Part Time / Full Time
Senior Data Engineer (US Remote)
Swyfft
Boston, MA

Job Description: Swyfft Holdings, LLC consists of Swyfft, LLC and Core Programs, LLC. Both are fast-growing, tech-enabled MGAs that are disrupting the traditional insurance industry by re-imagining how you price and bind home insurance and commercial package products. From lightning-fast quotes to hassle-free claims servicing, Swyfft Holdings, LLC leverages big data to provide the very best customer service experience in the industry. We're growing, we're expanding, and we're looking for tech-savvy folks like you to join our team!

This position is a U.S. remote-based opportunity. Some travel for team meetings and trainings may be required.

About the Position: As a Senior Data Engineer, you will be instrumental in supporting the development and use of our data systems. Your goal is to ensure that information flows timely and accurately to and from the organization as well as within it. A successful Senior Data Engineer will bring a strong understanding of databases and data analysis procedures. You are a highly technical SQL expert with strong problem-solving skills and excellent troubleshooting capabilities. In a perfect world, you might even have some previous experience working for an insurtech or an analytics measurement platform.

Key Responsibilities (what you'll be asked to do):
- Build efficient ways to organize, store, and analyze data while maintaining availability and consistency.
- Create processes and enforce policies for effective data management.
- Formulate techniques for quality data collection to ensure adequacy, accuracy, and legitimacy of data.
- Devise and implement efficient and secure procedures for data handling and analysis with attention to all technical aspects.
- Establish rules and procedures for data sharing with upper management and external stakeholders.
- Support others in the daily use of data systems and ensure compliance with legal and company standards.
- Provide assistance with reports and data extraction when needed.
- Monitor and analyze information and data systems and evaluate their performance to discover ways of enhancing them (such as new technologies and upgrades).
- Ensure digital databases and archives are protected from security breaches and data losses.
- Troubleshoot data-related problems and authorize maintenance or modifications.

The Successful Candidate (what we're looking for):
- You have a strong understanding of databases and data analysis procedures.
- You have an analytical mindset and strong problem-solving skills.
- You have excellent communication and collaboration skills.
- You have intense attention to detail and quality assurance.

Some Requirements:
- Expertise in SQL (Microsoft SQL Server and PostgreSQL).
- Familiarity with modern database and information system technologies.
- Expertise in both Tableau Desktop and Tableau Server.
- 7+ years of experience as a data manager.
- Excellent understanding of data administration and management functions such as collection, analysis, and distribution.
- Understanding of data warehousing and star schemas (a small illustrative schema sketch appears below this posting).
- Basic familiarity with predictive analysis and data visualization techniques.
- Solid understanding of R and Python environment configuration and basic programming.
- Expert level in Microsoft Excel.
- Understanding of spatial database functionality is a plus.

Education: Bachelor's degree or equivalent experience required in a related field. Advanced degrees or certifications are a plus.

Computer Skills: You are familiar with predictive analysis and data visualization techniques using relevant tools such as Tableau, Dataiku, R, and Python. Must be proficient with MS Office and other internal insurance-related programs, systems, or applications. Ability to communicate effectively using programs such as Slack and MS Teams. You are comfortable sharing screens and video chatting.

Other: Reliable high-speed internet connectivity required. Designated quiet work-from-home space.

We Have a Great Benefits Package!
- 20 days of PTO annually
- Medical, Dental, Vision
- Short- and Long-Term Disability (Company Paid)
- Life & AD&D (Company Paid)
- Healthcare, Dependent Care, and Transit FSA
- 401K with a generous matching contribution and no vesting schedule

It is the policy of Swyfft to provide equal employment opportunities to all employees and applicants for employment without regard to race, religion, color, ethnic origin, gender, gender identity, age, marital status, veteran status, sexual orientation, disability, or any other basis prohibited by applicable federal, state, or local law. EOE/AA/M/D/V/F.

Please Note: Swyfft is not accepting 3rd-party agency resumes for this position; please do not forward resumes to our careers email address or Swyfft employees. Swyfft will not be responsible for any fees related to unsolicited resumes.
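As a rough illustration of the star-schema requirement above (invented data, not from the posting), the Python/SQLite sketch below builds one claims fact table keyed to date and policy dimensions and runs the kind of slice-and-aggregate query a Tableau report would issue. The table and column names are hypothetical, and a production warehouse would run on SQL Server or PostgreSQL.

# Minimal star-schema sketch in SQLite: one fact table keyed to two dimensions.
# Table and column names are invented for illustration; a real insurance warehouse
# would have many more dimensions (agent, product, geography, ...).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_date   (date_key INTEGER PRIMARY KEY, calendar_date TEXT, year INTEGER);
    CREATE TABLE dim_policy (policy_key INTEGER PRIMARY KEY, policy_number TEXT, line_of_business TEXT);
    CREATE TABLE fact_claims (
        claim_id    TEXT PRIMARY KEY,
        date_key    INTEGER REFERENCES dim_date(date_key),
        policy_key  INTEGER REFERENCES dim_policy(policy_key),
        paid_amount REAL
    );
""")

con.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(20230101, "2023-01-01", 2023), (20230201, "2023-02-01", 2023)])
con.executemany("INSERT INTO dim_policy VALUES (?, ?, ?)",
                [(1, "HO-0001", "Homeowners"), (2, "CP-0002", "Commercial Package")])
con.executemany("INSERT INTO fact_claims VALUES (?, ?, ?, ?)",
                [("CLM-1", 20230101, 1, 2500.0),
                 ("CLM-2", 20230101, 2, 10750.0),
                 ("CLM-3", 20230201, 1, 430.0)])

# Typical star-schema query: aggregate the fact table, slicing by dimension attributes.
for row in con.execute("""
        SELECT d.year, p.line_of_business, SUM(f.paid_amount) AS total_paid
        FROM fact_claims f
        JOIN dim_date d   ON d.date_key = f.date_key
        JOIN dim_policy p ON p.policy_key = f.policy_key
        GROUP BY d.year, p.line_of_business
        ORDER BY total_paid DESC"""):
    print(row)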

Part Time / Full Time
Senior Data Engineer
Takeda Pharmaceutical
Boston, MA

By clicking the "Apply" button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description

About the role: At Takeda, we are a forward-looking, world-class R&D organization that unlocks innovation and delivers transformative therapies to patients. By focusing R&D efforts on four therapeutic areas and other targeted investments, we push the boundaries of what is possible in order to bring life-changing therapies to patients worldwide. Join Takeda as a Senior Data Engineer, where you will lead the implementation of development, testing, and automation tools and IT infrastructure in line with SDLC principles and lead the implementation of DevOps across the entire architecture. You will report to the Head, Digital Platforms and Manufacturing Informatics Cell Therapy and will be a part of the Cell Therapy Engineering and Automation team.

How you will contribute:
- Lead the implementation of various development, testing, and automation tools and IT infrastructure in line with SDLC principles, and lead the implementation of DevOps across the entire architecture. Specifically, lead continuous improvement and build continuous integration and continuous deployment (CI/CD) pipelines.
- Write code to develop and test the required solution and architecture features to enable seamless data pipelining and flow to many different end applications, such as a front-end visualization platform, Tableau dashboards, etc. (a minimal transform-and-test sketch appears below this posting).
- Test the code using the appropriate testing approach. Deploy software to production servers. Contribute code documentation, maintain playbooks, and provide timely progress updates.
- Define relational tables, primary and foreign keys, and stored procedures to create a data model structure. Evaluate existing data models and physical databases for variances and discrepancies.
- Oversee the active maintenance and improvement of current data warehouses and data pipelines.
- Create training documentation and train junior team members on data pipelining, CI/CD processes, and architecture implementation and testing, and provide system troubleshooting support.
- Actively communicate with scientists, analytical and process development leads, manufacturing, and non-clinical and clinical teams to inform QbD (Quality-by-Design) and systems approaches, data analyses, and engineered improvements driving discovery and development.
- Provide and support the implementation of engineering solutions by building relationships and partnerships with key stakeholders, determining and carrying out necessary processes, monitoring progress and results, recognizing and capitalizing on improvement opportunities, and adapting to competing demands, organizational changes, and new responsibilities.
- Demonstrate up-to-date expertise and apply it to the development, execution, and improvement of infrastructure setup, and provide guidance to others by supporting and aligning the efforts of multiple stakeholders.
- Work in a matrixed environment by leading projects using a product mindset.

Minimum Requirements/Qualifications:
- Master's degree or higher in a quantitative discipline such as Statistics, Mathematics, Engineering, or Computer Science
- 4+ years of experience working in a data engineering role in an enterprise environment
- Entry-level certification with AWS Cloud, or prior experience developing solutions on cloud computing services and infrastructure in the data and analytics space
- Strong experience developing, working in, and maintaining production data pipelines and production data warehouses
- Strong understanding of the Software Development Life Cycle (SDLC) as it applies to data systems, and project planning/execution skills including estimating and scheduling
- Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management (especially GitHub), build processes, testing, and operations
- Ability to read/write Python and R scripts for building data transformation pipelines
- Demonstrated experience with a variety of relational database and data warehousing technologies such as AWS Redshift, Athena, RDS, and BigQuery
- Demonstrated experience with big data processing systems and distributed computing technologies such as Databricks, Spark, SageMaker, Kafka, etc.
- Prior experience working with non-technical stakeholders to deliver working data content for consumption via ad hoc analysis
- Collaborative mindset and teamwork, with the ability to challenge and engage an audience toward better outcomes
- Ability and prior experience navigating a challenging, matrixed organization
- Ability to multi-task across a range of functional and technical contexts

Nice-to-haves:
- Prior experience with data analysis including aggregation, cross-data-set analyses, generating confusion matrices, exception finding, and data mapping
- Ability to read/write ANSI-compatible SQL from scratch, including selects and aggregate functions, DDL/DML
- Prior experience in consulting or analytics project delivery through the entire software lifecycle: requirements, design, testing, deployment
- Prior experience developing documentation for a data platform
- Prior experience designing and implementing data architectures, including tables, views, facts, and dimensions
- Prior experience with data engineering projects and teams at an enterprise level
- Strong experience with ETL/ELT design and implementations in the context of large, disparate, and complex datasets
- Prior experience managing, overseeing, and guiding junior data engineers

What Takeda can offer you:
- Comprehensive Healthcare: Medical, Dental, and Vision
- Financial Planning & Stability: 401(k) with company match and Annual Retirement Contribution Plan
- Health & Wellness programs including onsite flu shots and health screenings
- Generous time off for vacation and the option to purchase additional vacation days
- Community Outreach Programs and company match of charitable contributions
- Family Planning Support
- Flexible Work Paths
- Tuition reimbursement

More about us: At Takeda, we are transforming patient care through the development of novel specialty pharmaceuticals and best-in-class patient support programs. Takeda is a patient-focused company that will inspire and empower you to grow through life-changing work. Certified as a Global Top Employer, Takeda offers stimulating careers, encourages innovation, and strives for excellence in everything we do. We foster an inclusive, collaborative workplace in which our teams are united by an unwavering commitment to deliver Better Health and a Brighter Future to people around the world.

This position is currently classified as "hybrid" in accordance with Takeda's Hybrid and Remote Work policy.

Base Salary Range: $102,200.00 to $146,000.00, based on candidate professional experience level. Employees may also be eligible for Short-Term and Long-Term Incentive benefits. Employees are eligible to participate in Medical, Dental, Vision, Life Insurance, 401(k), Charitable Contribution Match, Holidays, Personal Days & Vacation, Tuition Reimbursement Program, and Paid Volunteer Time Off.

EEO Statement: Takeda is proud of its commitment to creating a diverse workforce and providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, parental status, national origin, age, disability, citizenship status, genetic information or characteristics, marital status, status as a Vietnam-era veteran, special disabled veteran, or other protected veteran in accordance with applicable federal, state, and local laws, and any other characteristic protected by law.

Locations: Boston, MA
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
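As a minimal, hypothetical sketch of the "write code and test it" responsibility above (not Takeda's actual codebase), the Python example below pairs a small pandas transformation with a pytest-style unit test that a CI/CD job could run on every commit. The column names and the batch-reading shape are invented placeholders.

# Minimal sketch of a tested data-transformation step: a pure function that reshapes
# raw long-format readings, plus a unit test a CI/CD pipeline could execute.
# Column names and the "batch readings" shape are hypothetical placeholders.
import pandas as pd

def pivot_batch_readings(raw: pd.DataFrame) -> pd.DataFrame:
    """Turn long-format (batch_id, parameter, value) rows into one row per batch."""
    wide = (
        raw.pivot_table(index="batch_id", columns="parameter",
                        values="value", aggfunc="mean")
           .reset_index()
    )
    wide.columns.name = None  # drop the pivot's columns label for a flat schema
    return wide

def test_pivot_batch_readings():
    raw = pd.DataFrame({
        "batch_id":  ["B1", "B1", "B2", "B2"],
        "parameter": ["viability", "cell_count", "viability", "cell_count"],
        "value":     [0.95, 1.2e6, 0.91, 9.8e5],
    })
    out = pivot_batch_readings(raw)
    assert list(out["batch_id"]) == ["B1", "B2"]
    assert out.loc[out.batch_id == "B1", "viability"].item() == 0.95

if __name__ == "__main__":
    # In CI (e.g., a pytest step on each commit) this test runs automatically.
    test_pivot_batch_readings()
    print("transform test passed")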

Part Time / Full Time
Senior Data Engineer
Takeda Pharmaceutical
Brookline, MA

Job description identical to the Takeda Pharmaceutical Senior Data Engineer posting for Boston, MA above.

Part Time / Full Time
Senior Data Engineer
Careerbuilder-US
Brookline, MA

Job description identical to the Takeda Pharmaceutical Senior Data Engineer posting for Boston, MA above.

Part Time / Full Time
Senior Data Engineer
Careerbuilder-US
Boston, MA

Job description identical to the Takeda Pharmaceutical Senior Data Engineer posting for Boston, MA above.

Part Time / Full Time
Senior Data Engineer
Takeda Pharmaceutical
Hyde Park, MA

Job description identical to the Takeda Pharmaceutical Senior Data Engineer posting for Boston, MA above.

Part Time / Full Time
Senior Data Engineer
Takeda Pharmaceutical
Auburndale, MA

Job description identical to the Takeda Pharmaceutical Senior Data Engineer posting for Boston, MA above.

Part Time / Full Time
Data Architect
Zifo RnD Solutions
Cambridge MA

Job Description
Data Architect
Curious about this position? Does the prospect of combining change management, cloud, and technology transformation to help our BioPharma customers become cloud-native, digital enterprises excite you? Do you have a background in digital transformation and data architecture? If yes, you may be our next Data Architect!
Working to enable R&D digital transformation, you will be responsible for ideating, designing, and deploying scientific informatics data models. You will own and lead the design of data architecture and provide recommendations on best practices and data standards. Our customers work across the scientific informatics landscape to accelerate the discovery and manufacture of new therapeutics, drugs, and vaccines to save lives faster! Consequently, we are rapidly growing and looking to expand our technical team in North America. This is an exciting opportunity to join our digital team and help leading pharmaceutical and biotech companies deploy and support cutting-edge scientific R&D technology globally.
Curious about your responsibilities?
Build an enterprise data strategy to help the R&D Informatics organization achieve its objectives
Provide consulting input to develop guidelines and design ideas
Develop business, information, and technical artifacts in relation to data architecture
Research, assess, and provide recommendations on technology platforms, standards, and best practices
Help customers estimate, plan, and prioritize data architecture efforts
Deliver guidance and oversight for project teams
Facilitate stakeholder discussions and help obtain approvals from senior leadership
Build relationships with customer, internal, and vendor leadership teams
Train, mentor, and guide Zifo data modelers, BSAs, and technical engineers
Hands-on involvement in developing data models and architecture for critical engagements
We are curious about YOU!
We get excited about Data Architects with:
Problem-solving, analytical thinking, and decision-making skills
Proven experience designing and deploying enterprise data architecture
Proven experience with commonly used technology solutions supporting data exchange, data governance, data storage, and data analytics & visualization
Proven experience with cloud-based RDBMS, data lakes, data warehouses, big data, and object storage platforms
Proven experience migrating data from legacy databases to modern platforms
Plugged in to emerging commercial and open-source technologies and solutions
Good understanding of concepts such as data integrity, data modeling, ETL/ELT, data flow diagrams, BPMN, knowledge graphs, triple stores, ontology development, and data science (see the brief sketch below)
Strong understanding of established and evolving industry standards and best practices (FAIR) for data management
Strong collaboration skills, with the ability to manage several tasks simultaneously and a self-starting, proactive approach
Understanding of SDLC processes/methodologies and associated tools
A successful Zifo-ite is:
Independent, self-motivated, and results-driven
Willing and able to quickly acquire new technical skills and business principles
A critical thinker who possesses logical reasoning
Curious and always looking for creative solutions to complex problems
What you bring to the table:
Experience with scientific informatics solutions
Background in chemistry or biological science
Experience in the Life Sciences industry
Experience with bio-ontologies
Experience with the change management aspects of technology transformation in large organizations
Experience with regulatory and compliance requirements for R&D
Curiosity to learn and passion to join the vanguard driving evolution in scientific informatics
A passion for ensuring customers (new and existing) have an amazing experience when they interact with Zifo
Able to relay complex information in simple terms to internal, partner, and customer stakeholders with different levels of technical knowledge
Confident and effective in communicating across teams, management tiers, and with customers, both in writing and verbally
What we bring to the table:
CURIOSITY DRIVEN, SCIENCE FOCUSED, EMPLOYEE BUILT. Our culture is unlike any other, one where we debate, challenge ourselves, and interact with all alike. We are a curious bunch, characterized by our passion to learn and spirit of teamwork. Zifo is a global R&D solutions provider focused on the industries of Pharma, Biotech, Manufacturing QC, Medical Devices, specialty chemicals, and other research-based organizations. Our team's knowledge of science and expertise in technology help Zifo better serve our customers around the globe, including 7 of the Top 10 Biopharma companies.
We look for science: Biotechnology, Pharmaceutical Technology, Biomedical Engineering, Microbiology, etc. We possess scientific and technical knowledge and bear professional and personal goals. While we have a "no doors" policy to promote free access within, we do have a tough door to walk in. We search with a two-point agenda: technical competency and cultural adaptability.
We offer a competitive compensation package including accrued vacation, medical, dental, vision, 401(k) with company matching, life insurance, and flexible spending accounts.
If you share these sentiments and are prepared for the atypical, then Zifo is your calling! Zifo is an equal opportunity employer, and we value diversity at our company.
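To make the knowledge-graph and ontology items above concrete, here is a small illustrative sketch. It assumes the open-source rdflib package, and the namespace, class, property, and instances are invented for illustration; none of it is Zifo or customer terminology.

```python
# Hypothetical example: a tiny R&D ontology fragment expressed as RDF triples.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/rnd/")  # invented namespace

g = Graph()
g.bind("ex", EX)

# Ontology terms: an Assay class and a measuresTarget property
g.add((EX.Assay, RDF.type, RDFS.Class))
g.add((EX.measuresTarget, RDF.type, RDF.Property))

# One instance: a made-up binding assay and the target it measures
g.add((EX.assay_0001, RDF.type, EX.Assay))
g.add((EX.assay_0001, EX.measuresTarget, EX.EGFR))
g.add((EX.assay_0001, RDFS.label, Literal("EGFR binding assay")))

# Serialize in Turtle, the format a triple store would typically load
print(g.serialize(format="turtle"))
```

The point is only that "ontology development" ultimately reduces to agreeing on classes, properties, and identifiers like these, which a triple store can then index and query.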
We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Part Time / Full Time
Data Architect / Technical Project Manager (845)
Axtria, Inc.
Boston MA

Job Description
Introduction: Axtria is a global provider of cloud software and data analytics to the Life Sciences industry. We help Life Sciences companies transform the product commercialization journey to drive sales growth and improve healthcare outcomes for patients. We are acutely aware that our work impacts millions of patients, and we are passionate about improving their lives.
Since our founding in 2010, technology innovation has been our winning differentiation, and we continue to leapfrog the competition with platforms that deploy Artificial Intelligence and Machine Learning. Our cloud-based platforms - Axtria DataMax™, Axtria InsightsMax™, Axtria SalesIQ™, and Axtria MarketingIQ™ - enable customers to efficiently manage data, leverage data science to deliver insights for sales and marketing planning, and manage end-to-end commercial operations.
With customers in over 75 countries, Axtria is one of the largest global commercial solutions providers in the Life Sciences industry. We continue to win industry recognition for growth and are featured in some of the most aspirational lists: INC 5000, Deloitte FAST 500, NJBiz FAST 50, SmartCEO Future 50, Red Herring 100, and several other growth and technology awards.
Axtria is looking for exceptional talent to join our rapidly growing global team. People are our biggest perk! Our transparent and collaborative culture offers a chance to work with some of the brightest minds in the industry. Axtria Institute, our in-house university, offers the best training in the industry and an opportunity to learn in a structured environment. A customized career progression plan ensures every associate is set up for success and able to do meaningful work in a fun environment. We want our legacy to be the leaders we produce for the industry. Will you be next?
We are currently hiring a Technical Project Manager / Data Architect with experience in pharmaceutical data sets for our key Life Sciences clients. Success in this role will require experience with SQL, data warehousing, business intelligence, and testing.
Responsibilities:
10+ years in executing large, complex, cross-functional programs for commercial operations needs (e.g., deployment of large-scale commercial ops platforms; global/multi-country deployment experience is a plus)
Design and build cloud infrastructure and data management processes to move the organization to a more sophisticated, agile, and robust target-state data architecture.
Develop systems that ingest, cleanse, and normalize diverse datasets; develop ETL/ELT data pipelines from various internal and external data sources (a minimal illustrative sketch follows this posting).
Develop an understanding of how data will flow and be stored across an organization's multiple applications.
Design and develop data management and data persistence solutions for application use cases, leveraging relational and non-relational databases and enhancing data processing capabilities.
Develop POCs to influence platform architects, business stakeholders, and software engineers to validate solution proposals and migrate.
Work with data and analytics experts to strive for greater functionality in data systems.
Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Identify data engineering needs and implement optimal solutions.
Automation of data management processes
Migration of on-premises systems to the cloud using big data technologies
Project planning and execution using Agile/waterfall methodologies.
Process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Enable data scientists and business users to run their processes efficiently
Build environments to support the development, system testing, UAT, and production cycle phases using SDLC methodologies.
Manage and maintain the overall project tasks and timeline. Ensure all project milestone dates are agreed upon and achieved.
Manage the overall schedule and risks.
Self-starter who can work independently.
Excellent at both verbal and written communications (must be able to represent Axtria and lead discussions with both technical and business leaders in the organization).
Experience working on Pharma/Life Sciences projects will be a plus
Experience in deploying and managing various Snowflake services.
Required Skills & Experience:
Bachelor's degree or above.
Expertise and working experience in at least 2 ETL tools among Informatica, Matillion, Talend, Data Factory, and Databricks.
Expertise and working experience in at least 2 DBMS/appliances among Snowflake, Redshift, Netezza, SQL Server, and Oracle.
Strong data warehousing and data integration fundamentals.
Advanced expertise with SQL.
Aware of techniques such as data modeling, performance tuning, and regression testing.
Experience in pharma commercial concepts and data sets.
Proficient in SQL, SAS, R, Python, or equivalent.
Understanding of cloud-based technology capabilities and limitations.
Exposure to Snowflake cloud and its service offerings is good to have.
Familiarity with KPIs for pharma sales and marketing
Experience with pharma commercial data such as sales, claims, payers, and specialty pharmacy
Understanding of pharma commercial operations concepts such as alignments, targeting, reporting, and incentives
Excellent verbal and written communication skills.
Logistics and Location:
We are hiring for the Boston, MA location.
U.S. citizens and those authorized to work in the U.S. are encouraged to apply.
Flexibility to travel and/or relocate within the US as per project requirements.
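A minimal ETL sketch of the ingest/cleanse/load responsibility referenced above, under stated assumptions: pandas and an in-memory SQLite database stand in for the Informatica/Matillion/Snowflake tooling the posting names, and every file, table, and column name is hypothetical.

```python
# Hypothetical ETL sketch: extract raw rows, normalize them, load a warehouse table.
import io
import sqlite3

import pandas as pd

# Extract: pretend this CSV arrived from an external commercial data vendor
raw_csv = io.StringIO(
    "Rep_ID,Product, Units ,Sale_Date\n"
    "R001,DRUG A,10,2023-01-05\n"
    "R002,drug a,4,2023-01-06\n"
)
df = pd.read_csv(raw_csv)

# Transform: clean column names and values, enforce types
df.columns = [c.strip().lower() for c in df.columns]
df["product"] = df["product"].str.title()          # "DRUG A" / "drug a" -> "Drug A"
df["sale_date"] = pd.to_datetime(df["sale_date"])

# Load: write the cleansed rows to a warehouse-style fact table
conn = sqlite3.connect(":memory:")
df.to_sql("fact_sales", conn, if_exists="replace", index=False)
print(pd.read_sql("SELECT product, SUM(units) AS units FROM fact_sales GROUP BY product", conn))
```

In a production pipeline the extract would read from an API or a cloud landing zone and the load target would be a platform like the ones the posting lists, but the extract-transform-load shape stays the same.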

Part Time / Full Time
Real Estate Data Scientist - TEMP - Remote
Harbor Freight Tools
Boston MA

The Real Estate Data Scientist (Temp) will report to the Sr. Manager of Real Estate Data Science. The Real Estate Data Scientist will help discover the information hidden in vast amounts of data and help make smarter decisions to open successful Harbor Freight stores. This individual's primary focus will be applying data mining techniques, doing statistical analysis, and building high-quality prediction systems to evaluate individual stores and markets. To be successful in this role, this individual will need a deep understanding of customer data, advanced statistical modeling techniques, and big data. In addition to data analysis, there will be time for visiting markets, performing additional project-based analyses, and department-level reporting as required to facilitate Harbor Freight's growth. Based upon successful performance in this role, this position can be a stepping stone to further roles in real estate data science, real estate strategy, or a number of other highly analytical roles within the organization, such as Data Analytics, Corporate Strategy, and Merchandising Finance, to name a few.
Duties and Responsibilities:
Provide analytical support for various initiatives including, but not limited to:
Digging into Harbor Freight's customers through statistical modeling on demographic, SKU-level, and financial data to develop a customer profile known as a "core customer"
Leveraging SKU/category transaction data and demographic/geographic data to understand and model the drivers of sales of different product categories across different stores
Monthly database mining to provide insight on customer behavior by region/market/SKU, etc., for various departments
Understanding the demand potential for incremental stores as well as the risk inherent in the brand's growth strategies for individual targets and at the market level
Various reporting requirements on trade areas and customer information
Assist with the development and implementation of national demographic market planning and site evaluation systems and processes.
Participate in cross-functional ad hoc projects, ranging from understanding underperforming regions, to how changes in company strategy impact where we open stores, to better educating business partners on the impact of opening stores, and present results in a clear manner.
Requirements:
Education (Required): A Bachelor's degree with credentials in Data Science, Mathematics, Statistics, Economics, or similar.
Experience (Required): Direct experience in real estate strategy, or 3+ years of experience in Data Analytics, CRM, or Inventory Forecasting with the majority of that experience in an analytical role.
Skills (Required & Preferred):
Familiarity with retail and/or consumer products preferred
Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naïve Bayes, SVM, Decision Forests, etc.
Good applied statistics skills, such as distributions, statistical testing, regression, etc.
Experience with common data science toolkits such as R or Python; excellence in at least one is highly desirable (a brief illustrative sketch follows this posting)
Experience with geospatial data and Geographic Information Systems a plus
Great communication skills
Experience with data visualization tools such as Tableau or Power BI preferred
Proficiency in using query languages like SQL
Considerable experience with Microsoft Office (advanced Excel and intermediate Access and PowerPoint skills) and other financial software/systems
Superior analytical and quantitative skills, particularly in strategic business and financial analysis, driven by an intellectual curiosity to understand and explain relationships in data
Strong ability to translate data from different sources and distill analyses into clear action plans
Experience in CRM, Market Planning, Site Selection, and Market Optimization preferred
Experience with traveling for work on a frequent basis
Experience and comfort working in a fast-paced business environment with the ability to consistently meet tight deadlines
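As a hedged illustration of the modeling skills listed above (not Harbor Freight's actual methodology or data), the sketch below fits one of the named algorithm families, a decision-forest classifier, to synthetic stand-in trade-area features using scikit-learn.

```python
# Hypothetical example: score candidate store sites with a random forest
# trained on synthetic demographic-style features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for demographic / trade-area features per candidate site
X, y = make_classification(
    n_samples=500, n_features=8, n_informative=5, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Precision/recall summary of how well "successful site" is predicted on held-out data
print(classification_report(y_test, model.predict(X_test)))
```

Swapping in k-NN, Naïve Bayes, or an SVM is a one-line change of the estimator, which is roughly what fluency in one of the named toolkits buys in practice.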

Part Time / Full Time
Senior Technology Lead, FTR Trading
Boston Energy Trading and Marketing LLC
Boston MA

Job Description
Boston Energy Trading & Marketing (BETM) is looking for a Senior Technology Lead to lead the FTR Technology team. The successful candidate will possess a strong programming and quantitative background with a passion for solving business problems with technology. This position is a critical hands-on role working directly with the FTR trading team and is responsible for developing, supporting, and maintaining trading and analytics solutions supported by APIs, applications, and visualization tools. The role requires a strong ability to work independently while collaborating with the FTR desk and other IT teams at BETM. This position will report to the head of the IT team and will be located in either our Boston, MA or Houston, TX office.
Responsibilities:
Work directly with FTR traders and FTR analysts to understand the existing platform, trading models, toolkits, and datasets. Develop software to support commercial activity across key market drivers (supply vs. demand, power and gas transmission and storage, weather, cross-commodity correlations, etc.)
Spearhead requirements gathering, design, and development activities to build and enhance models/algorithms, data collectors, pipelines, data storage, and reporting solutions for power flow, economic dispatch, and power price simulations.
Develop cloud-native solutions to promote integration within the IT framework across various North American commodity markets, including energy, capacity, ancillary services, and power and gas basis markets in the major ISO regions (PJM, CAISO, NYISO, MISO, SPP, ISO-NE, ERCOT, IESO, AESO)
Manage, troubleshoot, and enhance trading models, data collectors, and data stores leveraged by the FTR trading team across transmission power flows, power generation economic dispatch, automated monitoring, analysis of transmission system constraints, and statistical simulation based on historical data. The role requires proactive analytical thinking to identify opportunities to enhance trading strategy and systems, and to drive change by investigating and evaluating new datasets and third-party tools for potential use in new or existing trading algorithms; support research and innovation through creative and aggressive experimentation with cutting-edge software, processes, procedures, and methods.
As the front office lead, manage business deliverables by collaborating with other cross-functional IT teams to support FTR trade capture, financial and risk analytics, risk limits monitoring, collateral management, and risk reporting. Ensure strategic solution design and facilitate prioritization across competing priorities.
Adhere to standard software and model development processes such as requirements definition, prototyping, issue tracking, source code control, and compliance with Risk and IT procedures and requirements.
Qualifications:
Bachelor's or higher degree in Computer Science, Mathematics, Physics, Quantitative Finance, Engineering, Statistics, or an equivalent field, with significant coursework or equivalent experience in programming, modeling, simulation, and statistical techniques.
10+ years of professional programming experience across Power or other Commodities technology teams
Experience working in a front office, supporting a quantitative analytics environment, is a must.
Proficiency in Python, R, MATLAB, C++, .NET, Java, or a comparable programming environment, using modern data structures, analysis, and visualization tools (Pandas, NumPy, Scikit-learn, Highcharts, etc.); a short illustrative sketch follows this posting
Experience working in a modern cloud-based environment utilizing DevOps framework tools. Familiarity with Microsoft Azure tools is a plus
Experience using traditional and cloud-based database systems (relational SQL and NoSQL) to manage large data sets supporting analytics and regression. Working knowledge of Unix and Windows environments.
Ability to communicate and interact with a wide range of users, ranging from very technical to non-technical.
Results-oriented team player, with the ability to handle a rapidly changing set of projects and priorities while maintaining a strong professional presence.
Strong analytical skillset with demonstrated attention to detail.
The candidate should be passionate about technology, staying current with trends.
Fast learner with the ability to adapt quickly and work in a dynamic environment.
Excellent time management skills.
Desired Experience:
Hands-on professional programming experience across the power industry and power trading is a big plus. Knowledge of US ISO power markets and FTR trading (PJM, CAISO, NYISO, MISO, SPP, ERCOT) strongly preferred.
Experience with industry-standard power transmission analysis and data platforms (PowerWorld, DAYZER, UPLAN, OPF models, YES Energy, Genscape, Panorama/MUSE).
Experience with DevOps practices, including CI/CD and infrastructure-as-code.
Experience with the following technologies: Git, containers (Docker), Kubernetes, microservices, shell scripting, messaging bus.
BETM is committed to a drug- and alcohol-free workplace. To the extent permitted by law, employees are subject to periodic random drug testing, and post-accident and reasonable suspicion drug and alcohol testing. EOE AA M/F/Protected Veteran Status/Disability.
BETM provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
EEO is the Law Poster.
Official description on file with Human Resources. Level, title, and/or salary may be adjusted based on the applicant's experience or skills.
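To illustrate the statistical-simulation and Python/NumPy items above, here is a generic Monte Carlo sketch on made-up "historical" prices. It is an assumption-laden toy, not BETM's trading models, and it uses no actual ISO or nodal data.

```python
# Hypothetical example: simulate forward price paths from a synthetic daily history.
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for a historical nodal price series ($/MWh)
historical = 40 + np.cumsum(rng.normal(0.0, 1.5, size=250))

# Estimate the distribution of daily changes from the "history"
daily_changes = np.diff(historical)
mu, sigma = daily_changes.mean(), daily_changes.std(ddof=1)

# Simulate 1,000 possible 30-day price paths starting from the last observed price
n_paths, horizon = 1000, 30
shocks = rng.normal(mu, sigma, size=(n_paths, horizon))
paths = historical[-1] + np.cumsum(shocks, axis=1)

# Summarize the simulated distribution of prices at the end of the horizon
p5, p50, p95 = np.percentile(paths[:, -1], [5, 50, 95])
print(f"30-day price percentiles: P5={p5:.1f}  P50={p50:.1f}  P95={p95:.1f}")
```

A real FTR workflow would replace the simple Gaussian shocks with power-flow and constraint modeling, but the fit-on-history, simulate-forward, summarize-percentiles plumbing is the same.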

Part Time / Full Time