Entry-Level Data Research Assistant – Analytics

Remote
Full-time

The Data Research Assistant sits at the center of every fact-based decision our teams make. You will collect data from public and proprietary sources, validate its accuracy, spot emerging patterns, and document findings, fueling dashboards, forecasts, and strategic recommendations. The position suits recent graduates seeking an entry-level analytics role that blends curiosity, critical thinking, and collaboration. Our remote-first culture lets you contribute from anywhere in the United States.


Responsibilities  

- Collect quantitative and qualitative data through web scraping, APIs, surveys, and database queries.  

- Verify datasets against multiple sources, flag discrepancies, and correct errors promptly.  

- Conduct exploratory analysis in Excel, SQL, or Python to surface basic trends and anomalies.  

- Maintain research logs, data dictionaries, and version control to ensure full traceability.  

- Prepare concise briefs, charts, and slide decks that translate findings for non-technical audiences.  

- Partner with data scientists, product analysts, and domain experts to refine research questions.  

- Track industry benchmarks across technology, finance, healthcare, retail, and media verticals.  

- Support ad-hoc requests under tight deadlines while upholding data governance standards.  


Must-have skills  

- Bachelor’s degree in Information Systems, Economics, Statistics, or a related field.  

- A foundation in SQL and spreadsheet modeling (pivot tables, VLOOKUP, Power Query).  

- Exposure to Python or R for data manipulation—Pandas, NumPy, or tidyverse.  

- Familiarity with BI tools such as Tableau or Power BI for rapid visualization.  

- Sharp attention to detail and tireless commitment to data accuracy.  

- Clear written and verbal communication; you can explain numbers to non-analysts.  

- Ability to prioritize tasks, juggle multiple projects, and meet firm deadlines.  
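To give candidates a feel for the role, here is a minimal sketch of a typical day-one task: cross-checking the same metric from two sources and flagging discrepancies. The file contents, column names, and figures are invented for illustration, not taken from our systems.

```python
import csv
import io

# Toy stand-ins for two exports of the same metric
# (in practice these would come from files, APIs, or database queries).
source_a = "region,revenue\nEast,100\nWest,250\n"
source_b = "region,revenue\nEast,100\nWest,240\n"

def load(text):
    """Parse a CSV export into {region: revenue}."""
    return {row["region"]: float(row["revenue"])
            for row in csv.DictReader(io.StringIO(text))}

a, b = load(source_a), load(source_b)

# Flag every region where the two sources disagree.
discrepancies = {r: (a[r], b.get(r)) for r in a if a[r] != b.get(r)}
print(discrepancies)  # -> {'West': (250.0, 240.0)}
```

A real version of this check would also handle missing regions, type errors, and tolerance thresholds, but the core habit, never trusting a single source, is exactly what this role builds.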


Nice-to-have extras  

- Knowledge of RESTful APIs and JSON parsing.  

- Experience with web scraping frameworks (BeautifulSoup, Scrapy, Selenium).  

- Understanding of statistical concepts: regression, hypothesis testing, confidence intervals.  

- Exposure to cloud data warehouses (BigQuery, Redshift, Snowflake).  

- Familiarity with Git or other version control systems.  
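If the API and JSON items above are unfamiliar, the skill amounts to little more than this hedged sketch: turning a JSON response into numbers you can summarize. The payload shape and field names here are hypothetical examples, not a real API of ours.

```python
import json

# Hypothetical API response body, hard-coded for illustration.
payload = '{"results": [{"id": 1, "value": 3.2}, {"id": 2, "value": 4.8}]}'

data = json.loads(payload)                       # parse JSON into Python objects
values = [row["value"] for row in data["results"]]
avg = sum(values) / len(values)                  # simple summary statistic
print(avg)  # -> 4.0
```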


What you will learn  

- Industry-grade research methodologies that stand up to peer review.  

- Best practices in data governance, privacy, and ethical sourcing.  

- Rapid iteration workflows in agile analytics squads.  

- Storytelling techniques that turn raw metrics into persuasive narratives.  

- Cross-functional collaboration with product, marketing, and executive leadership.