TRM Labs helps financial institutions, crypto businesses, and government agencies detect and investigate crypto-related financial crimes.
TRM Labs is pioneering the fight against cryptocurrency fraud and financial crime. You'll join a lean, high-impact team tackling critical global challenges, from human trafficking to terrorist financing. TRM's products empower financial institutions and crypto businesses such as PayPal and Visa, as well as government agencies including the FBI and the IRS, to monitor blockchain transactions and identify illicit activity. The company has raised over $79M from leading investors such as JPMorgan Chase and Visa. By blending blockchain data with advanced analytics and machine learning, TRM Labs provides actionable intelligence to secure the evolving financial landscape for billions of people globally.
TRM Labs is a remote-first company, intentionally building a culture that thrives across time zones and continents. They prioritize clear communication, thorough documentation, and meaningful relationships through tools like Slack, Loom, and Notion. You will find a focus on deep work, with prioritization taken seriously to allow you to concentrate on high-impact opportunities. Feedback is immediate and applied on the spot, fostering a culture of continuous improvement. TRM runs fast, expecting ownership, clarity, and follow-through. If something takes months elsewhere, it often ships here in days. They coach directly, assume positive intent, and play for the front of the jersey.
TRM Labs engineers tackle complex challenges in data engineering, data science, and threat intelligence. You'll build foundational data infrastructure, architect modern data lakehouses for petabyte-scale data, and optimize real-time data pipelines. The team leverages tools and frameworks like Apache Spark, Trino, Hudi, Iceberg, and Snowflake. They prioritize speed, high standards, and distributed ownership. Expect to build scalable engines, find ways to compress timelines using the 80/20 principle, and ship observability dashboards quickly. The focus is on iterating rapidly and solving ambiguous, cross-functional problems end to end.