Travis Cucore

  • Edelman Financial Engines

    I don't talk about my current role or projects I've worked on in my current role publicly given the potential for this information to be used in social engineering attacks. I didn't always do this, but advances in Generative "AI" and related technologies, paired with the highly regulated nature of where I work and the data I work with, are enough to favor caution over self-promotion. There's enough in the rest of my professional history and blog posts for you to get a good sense of what I am capable of. If you believe you have an interesting or compelling opportunity I should consider, please schedule time with me via the "Schedule Meeting" button at the top of the page, contact me via LinkedIn, or send an email to the address listed in the resume accessible via the "View Resume" button at the top of the page. Thanks for understanding.

    Projects

      As stated in the employer description, I do not publicly discuss projects for my current employer. Thanks for understanding.
  • Night Owl Technology Services

    As a business owner, Travis had to learn a different side of the consulting business. He worked with a lawyer to formalize Night Owl Holdings, LLC and the DBA he did business under (Night Owl Technology Services). He procured and implemented the technology stack he would need to conduct business (Salesforce, GitLab, M365, equipment, etc…) and, of course, won his first bit of C2C work for an energy company looking to add traction to their new Salesforce implementation.

    Projects

    • Sales Cloud - Post Implementation Improvements

      Technical Consultant | Full Stack Software Engineer

      Energy

      This client was struggling with user adoption and was eager to complete some business-critical automation, reporting, and integrations to help drive that adoption. The goal was to harmonize how Salesforce and NetSuite worked together, which was sometimes difficult due to the overlaps in product- and pricing-centered functionality. After speaking with end users and their leadership, Travis set out to make changes that would improve user adoption and org stability and to implement the process automation the client wanted.

      • Improving User Adoption - Travis proposed standardizing how and where data is shown in the Salesforce UI, and the client agreed. Doing so would reduce cognitive load, which would in turn improve adoption. To that end, Travis refactored the UI in high-traffic areas to standardize content and composition, making it easier for users to find what they need. These changes were well received, resulting in improved adoption.
      • Improving Org Stability - When Travis started, the client had zero tests running and therefore did not know whether their code was behaving as it should or whether they just hadn't hit an edge case in production yet. With buy-in from his client, he wrote functional/unit/integration tests exercising business-critical processes, existing Apex, flows (where possible), and triggers, ensuring they were run as part of the CI job he wrote to automate deployment.
      • Git and CI/CD Implementation - When Travis met with the client before being selected for the work, he asked about their version control; when they responded that there was none, he suggested implementing Git, pipeline automation, and quality gates as part of his work should he be selected. Once selected, he promptly set out to implement these things while gathering requirements for the other work he needed to do.
      • Improving Communication to End Users - To improve how changes were communicated to end users, Travis wrote some scripts to publish a Salesforce bell notification with a link to the release notes when changes were pushed to production. Release notes were just a bulleted list of the work done, linking to JIRA tickets. End users could scan the bullets for anything they had interest in and quickly find out what changed. This was specifically to address cases where earlier communications had not filtered down far enough internally. (A minimal sketch of this notification mechanism follows this list.)
      • Business Critical Process - Travis noticed a lot of friction around how quotes were created and negotiated internally. He proposed an app to centralize communication and encourage open collaboration with business-defined quality gates (approval processes). To accomplish this, he worked with end users and leadership to drive consensus and define a process. The process was implemented with screen flows, as the client had explicitly stated a preference for declarative tools. The app built around the flow was a basic Chatter/events arrangement with an Outlook integration surfacing real-time communications related to a quote in the email history.
      • Improving the Object/Data Model - The client appeared to have a lot of evolutionary leftovers in the data model. There might be four fields with effectively the same name but differing data types. Relationships were sometimes pointed in the wrong direction, and objects appeared to be duplicated as well. This was causing problems due to the amount of back and forth associated with sorting out which was the right one to use, and it would likely cause problems down the line for the ongoing integration work around these objects and fields. To get ahead of this, Travis audited each of the objects he knew he would be working with and worked with the client to decide how to clean each object up, making sure they understood the problem being solved and that scope was limited to prevent too much impact to work in flight, with more cleanup to be done when they could spare the time to see it done.
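
      As a rough illustration of the release-note bell notification mentioned above, a minimal Apex sketch is shown below. The class name, the Release_Notes notification type, and the URL handling are assumptions for the example, not the exact scripts used on the project.

        public with sharing class ReleaseAnnouncer {
            // Sends a bell notification linking to the release notes for a given release.
            // Assumes a Custom Notification Type with DeveloperName 'Release_Notes' exists in Setup.
            public static void announce(String releaseLabel, String releaseNotesUrl, Set<String> recipientIds) {
                CustomNotificationType notifType = [
                    SELECT Id FROM CustomNotificationType
                    WHERE DeveloperName = 'Release_Notes'
                    LIMIT 1
                ];

                Messaging.CustomNotification notification = new Messaging.CustomNotification();
                notification.setNotificationTypeId(notifType.Id);
                notification.setTitle('New release: ' + releaseLabel);
                notification.setBody('Tap to see what changed in this deployment.');
                // Open the release notes page when the notification is clicked.
                notification.setTargetPageRef('{"type":"standard__webPage","attributes":{"url":"' + releaseNotesUrl + '"}}');
                notification.send(recipientIds);
            }
        }

      Invoking something like this from the deployment job (for example, as anonymous Apex run through the CLI) is all that is needed to announce a release.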

    • Experimental Trigger Framework Implementing a Centralized Dispatcher Pattern

      Mad Scientist | Glutton for Punishment

      Internal | Research and Development

      Travis had seen and implemented the dispatcher pattern a few times in his career, and thought he'd experiment with reducing the pattern to a single dispatcher class by leveraging dynamic class instantiation. He thought this would allow for better telemetry logging for performance profiling and diagnostic purposes, as well as eliminate the need to implement a dispatcher for every object with a trigger handler. While he was able to implement the pattern and show that it works, he noticed that it was only viable if certain conditions could be avoided. The pattern relies on standardizing the naming convention of trigger handler classes, which means that if an object API name is of maximum length, it would be impossible to name an Apex class for the object with a suffix, as the resulting class name would exceed the maximum allowable length. He plans to write about this in his blog at some point. That said, the takeaways are described in broad strokes below, and a minimal sketch of the dispatcher follows the list.

      • Because Salesforce object API names can be long enough to preclude adding "Handler" or some other standard suffix used for dynamic class instantiation, there are edge cases that need to be avoided to ensure a standardized handler class name does not exceed the maximum length for an Apex class. While this edge case is possible, it is highly unlikely to present frequently, if at all, and would be nearly impossible to get into production.
      • While he's never been asked to, or needed to, write a trigger/handler for an object in a foreign namespace, it is possible to do so. This case is more complicated, as it further reduces the acceptable length of an object API name to account for the namespace prefix or an abbreviation of it. While unlikely, collisions could be possible if two objects with the same API name from different namespaces are present and the namespace is shortened. Because this is extremely unlikely to be a desirable trait, it is better to ignore this case and treat it as off limits until the base case (local namespace) is more fully baked.
      • For the following reasons, the issues mentioned related to the base case (local namespace) can be more or less ignored, so long as you've considered the likelihood that your org would max out the length of an object API name or identified that this is already the case.
        • Salesforce will not let you deploy an Apex class with a name exceeding the maximum length.
        • If by some chance Salesforce did allow such a class to be deployed, it would likely be caught before promoting beyond the developer's own org, as the handler they wrote would not execute because it could not be instantiated. Salesforce docs state that classes with exceptionally long names are shortened internally and the resultant name is not visible to the outside world.
        • Should additional assurances be needed, it would be trivial to run a script across all handler classes in the manifest, checking the length of each name and failing the deployment if it is too long.
        • Utilizing the maximum object API name length would likely be considered questionable in the first place.
      • Centralizing the dispatcher reduces the testable surface area and reduces complexity. Developers won't need to write a new dispatcher for every new object requiring a trigger and, therefore, won't need to write new dispatcher tests. That's a win-win proposition.
      • Centralizing the dispatcher allows for better logging of telemetry and performance monitoring. Caching a sliding window of trigger execution telemetry could be exceptionally valuable for profiling and troubleshooting, especially for orgs with excessive trigger recursion brought on by cyclic business logic.
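
      As a rough sketch of the experiment (not the published code), the centralized dispatcher might look like the Apex below. The ITriggerHandler interface and the "Handler" suffix convention are assumptions made for this example, and each class would live in its own file in a real org.

        // Contract every handler implements.
        public interface ITriggerHandler {
            void handle(List<SObject> newRecords, Map<Id, SObject> oldMap, System.TriggerOperation operation);
        }

        // Single dispatcher shared by every trigger; handlers are resolved by naming convention.
        public with sharing class CentralDispatcher {
            public static void dispatch() {
                // Derive the object API name from the records in the trigger context.
                List<SObject> records = Trigger.isDelete ? Trigger.old : Trigger.new;
                String objectName = records[0].getSObjectType().getDescribe().getName();

                // Convention: <ObjectApiName without __c> + 'Handler'. Very long API names can push
                // this past the Apex class-name length limit, which is the edge case noted above.
                String handlerName = objectName.removeEnd('__c') + 'Handler';

                Type handlerType = Type.forName(handlerName);
                if (handlerType == null) {
                    return; // No handler registered for this object.
                }

                // A central entry point like this is also a natural place to capture
                // telemetry (timings, recursion depth) for every trigger execution.
                ITriggerHandler handler = (ITriggerHandler) handlerType.newInstance();
                handler.handle(Trigger.new, Trigger.oldMap, Trigger.operationType);
            }
        }

      Each object trigger then collapses to a single call to CentralDispatcher.dispatch(), which is what eliminates the per-object dispatcher and its tests.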

    • Profiling Performance Gains Using WebAssembly with Lightning Web Components

      Curious Cat

      Internal | Research and Development

      Travis wrote about this experiment in a blog post. While the takeaways are described in broad strokes below, you are encouraged to read the full writeup, as it provides everything you need to reproduce this work yourself, including source and instructions on how to build and deploy it. That said, he wondered whether he could compile Rust to WebAssembly and see a significant enough performance boost, from a speed-of-execution perspective, to warrant using it in a production setting. The short answer is that it absolutely does, and it absolutely can be warranted in certain situations.

      • It is important to reduce the surface area of the API exposed through JS bindings due to the overhead of calling your WASM through them.
      • Because of the performance overhead of calling WASM through JS bindings, there's a floor on the amount of work below which you cannot gain meaningful performance, and that floor depends on factors you don't control (available resources on the machine, current load, etc...).
      • Using WASM with your Lightning Web Components is best suited for cases where:
        • You need to process large amounts of data
        • Using Apex would not be fast enough. (Consider describe calls and how long it takes to chunk through that data when you need to.) In this case, you want to get your data via Apex, LDS, the GraphQL API, etc., and send it directly to WASM without processing. (A minimal sketch of this hand-off is shown after this list.)
      • The bindings can be bidirectional. That is to say, you can call JS from your WASM binary so long as you correctly generate the binding. (Beware of the associated overhead.)
      • You can use any language with a compiler capable of targeting "web" (WebAssembly), but I'd strongly suggest you choose something with reasonably good memory-safety features (I like Rust, so I used that).
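
      To make the "fetch, then hand off untouched" point above concrete, a hypothetical Apex controller might simply return serialized records so the Lightning Web Component can pass the payload straight to the WASM module without reshaping it in JavaScript first. The names and query below are illustrative and not taken from the blog post.

        public with sharing class WasmFeedController {
            // Returns raw rows as one JSON string; all heavy processing happens in WASM.
            @AuraEnabled(cacheable=true)
            public static String getRawOpportunityData() {
                List<Opportunity> rows = [
                    SELECT Id, Name, Amount, StageName, CloseDate
                    FROM Opportunity
                    LIMIT 50000
                ];
                return JSON.serialize(rows);
            }
        }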

  • Slalom | Global Digital Services

    At Slalom, Travis started with supporting chat projects but quickly moved on to more interesting work leveraging NLP-enabled Einstein bots, Service Cloud, the Salesforce Mobile SDK, Embedded Services, as well as novel off-platform approaches to solve the unique problems his clients brought him. Twice weekly he would hold office hours that served as a judgement-free place to get help with project work or just get an informal design/code review. When many consultants were on the bench, he planned and delivered sessions to skill up less experienced developers, which was well received. When Generative AI (the first public iteration of ChatGPT) was good enough to do anything interesting with, Travis saw the opportunity and started exercising the model to better understand the business value it might represent. He learned basic prompt engineering patterns like Chain-of-Thought and Tree-of-Thought, as well as Retrieval Augmented Generation (RAG architecture), among other things. His contributions to Slalom's Generative AI Framework for Salesforce are being sold into market today.

    Projects

    • Salesforce | Off Platform Pub/Sub (Avoiding Salesforce Entitlements)

      Solution Architect | Software Engineer

      Auto Rental

      A large auto rental company sought to reduce the 'swivel chair' problem in their call center, where agents were required to reference information across three screens from several external applications. Travis' team (part of a comprehensive program) was tasked with the integration of Salesforce with RingCentral.

      The business objective was to enable agents to swiftly confirm the identity of callers by retrieving information from the Salesforce backend utilizing data provided by RingCentral. He initially proposed leveraging the Salesforce CRUD API to insert records and trigger platform events. This approach would log interactions for compliance purposes and broadcast a system-wide message via the Salesforce event bus (Platform Events). The client, however, expressed concerns about unexpected costs related to Salesforce platform entitlements.

      Consequently, Travis suggested an alternative off-platform solution leveraging AWS IoT Core, which brought with it the added benefit of facilitating easier and more economical integration of subscribers from various platforms.

      • Designed and proposed a solution leveraging the Salesforce CRUD API and Salesforce Platform Events, which was not pursued due to the associated costs of platform event entitlements. (A rough sketch of this approach appears after this list.)
      • Devised and recommended an alternative off-platform publish/subscribe model utilizing the MQTT protocol, backed by AWS IoT Core.
      • Selected AWS as the service provider given the client's existing relationship with AWS and the fact their usage would consistently remain within the free tier, avoiding additional costs.
      • Prepared comprehensive documentation for both the preferred and alternative solutions, including a less conventional fallback option for the client's Technical Advisory review board.
      • Authored the MQTT client using the paho-mqtt library due to its compliance with the Salesforce Lightning Locker Service and its permissible licensing for commercial application. If the client had been using Lightning Web Security, the AWS SDK would have worked.
      • Integrated the MQTT client with the client's service console to implement the desired UI.
      • Provided detailed technical documentation describing solution architecture and operation.
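
      For context, the on-platform design described in the first bullet amounted to roughly the hypothetical Apex below; the object and platform event names are illustrative rather than the client's.

        public with sharing class CallInteractionService {
            // Logs the inbound call for compliance, then broadcasts it on the event bus.
            public static void recordInboundCall(String callerPhone, String ringCentralCallId) {
                // 1. Persist the interaction (illustrative custom object).
                Call_Interaction__c interaction = new Call_Interaction__c(
                    Caller_Phone__c = callerPhone,
                    External_Call_Id__c = ringCentralCallId
                );
                insert interaction;

                // 2. Publish a platform event so subscribed components and flows can react.
                Call_Received__e callEvent = new Call_Received__e(
                    Caller_Phone__c = callerPhone,
                    Interaction_Id__c = interaction.Id
                );
                Database.SaveResult publishResult = EventBus.publish(callEvent);
                if (!publishResult.isSuccess()) {
                    System.debug(LoggingLevel.ERROR, publishResult.getErrors());
                }
            }
        }

      This is the design the platform event entitlement concern ruled out; the delivered solution published the equivalent message to an AWS IoT Core MQTT topic instead.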

    • CI/CD Updates (Github)

      Team Lead | DevOps Engineer

      Internal - Repository Restructure and Pipeline Automation

      The Global Digital Services team wanted to update the repository structure and introduce automation to more effectively manage internal assets, in an effort to reduce the learning curve and friction involved with deploying assets from the library for less technical team members.

      The primary challenge was configuring the pipeline to handle a variety of metadata subsets consistently. This range included an Experience Cloud site, a chatbot, various code segments, their associated documentation, tests, and deployment scripts.

      Leading two developers, Travis and the team developed a robust pipeline with practical quality checks. Sophisticated by the standards of Salesforce projects, it allows the deployment of individual assets and includes scripts capable of populating a new development environment with data while preserving relationships within a given data set.

      He also introduced the concept of tokenization for metadata that typically varies during the deployment process as it moves through the pipeline, which is especially relevant when deploying community sites, embedded services and the like.

      Furthermore, the continuous integration (CI) process now automatically manages a series of activations and configurations, eliminating occurrences of developers forgetting to activate things after deployment.

      • Guided a collaborative team effort, enhancing skills in Salesforce data exports with relationship preservation and BitBucket pipeline automation.
      • Designed a new branching structure that not only supported seamless pipeline automation, but accommodated forking for client-tailored adjustments to assets.
      • Set up new Salesforce environments for Development, Integration, and Stable, with the Stable environment dedicated to asset and capability demonstration.
      • Integrated quality gates within the pipeline to enhance code quality using Apex PMD.
      • Implemented JWT authentication to Salesforce using SFDX and environment secrets, reducing script duplication.
      • Updated Apex classes to align with new quality gates and testing requirements.
      • Created seed scripts to efficiently populate new development environments with necessary data.
      • Tokenized metadata that changes between orgs, such as admin emails on experience sites.

    • Authenticated, NLP Enabled Transactional Chat

      Software Engineer | Documentation Writer

      Telecom (Internet Service Provider)

      A large provider of broadband internet services wanted to get started with NLP-enabled transactional chatbots (Einstein) and an authenticated user experience allowing their users to conduct account-level business through chat without having to interact with a human, effectively improving deflection of calls/chats routed to humans in a call center.

      • Integrated with GTM objects to get privileged user information needed for identifying the user in Salesforce and as a flag to know they are authenticated.
      • Designed and built an application allowing an admin to configure a UI form element, similar to a details page or highlights panel, by selecting data and identifying where it should be placed. This is useful in circumstances where the data you need on the page is reachable by relationship but not very close, or requires multiple queries.
      • Added to internal Asset Library after passing requested peer review.

    • Leading AI Solution Integration & Training for Client Success

      Technical Instructor/Solution Architect

      Health and Life Sciences (Generative AI - Object Summarization)

      This engagement was unique in Travis' experience, as the client was more interested in educating themselves and understanding the value proposition given some pain points they wanted to discuss as possible candidates.

      He led the initiative by providing expert insights on Generative AI, offering specialized training, and showcasing its practical applications through the development of a Proof of Concept (PoC). Although not ready for production due to the eight-week timeframe, the PoC was comprehensive and required only minor adjustments to meet peer review standards and be promoted to production.

      The PoC, dubbed Generalized Object Summarization, is an easily configurable extension to the Slalom Generative AI Framework for Salesforce. With minimal effort, any admin capable of writing a SOQL query can declaratively construct a prompt similar to how email templating is done.

      The PoC proved to be a key educational tool, offering something concrete and relevant that participants could engage with and understand. Training started with an in-depth look at Retrieval Augmented Generation (RAG) and the system-level architecture of our framework, followed by practical sessions on utilizing the PoC. Once participants grasped the overarching thought process, we delved into prompt engineering, concentrating on Chain of Thought (CoT) and Skeleton of Thought (SoT) patterns.

      The work Travis did on this project is currently being sold into the market as part of Slalom's Generative AI Framework for Salesforce.

      • Designed, developed, and deployed the Generalized Object Summarization feature for the Slalom Generative AI Framework, adding the ability for any admin capable of writing a SOQL query to declaratively construct a prompt similar to how email templating is done.
      • Authored comprehensive internal documentation and successfully advanced the feature into our production Asset Library following a rigorous peer review.
      • Created and delivered detailed documentation on the application's architecture and operational procedures for client reference and use including some ideas on how to extend it to gain additional value.
      • Conducted in-depth discovery interviews with diverse company stakeholders to identify potential future use cases and to gain insight into their requirements, expectations, and concerns.
      • Facilitated weekly training sessions for the client's technical team members, fostering knowledge transfer and skill development in Generative AI.
      • Presented the new framework feature to the sales team to explain what it is, how it works, and where it can be used. Fielded questions and provided documentation as needed.

    • Authenticated Chat Experience

      Technical Lead

      State Government (ChatBot)

      Travis' first project at Slalom was a multifaceted Chatbot initiative. He was tasked with integrating high-priority features from the backlog, and ensuring solutions were feasible with input from Solution and Technical Architects. His role extended to scripting for data seeding, refactoring code to meet new API requirements, and facilitating the bot's mobility across Salesforce organizations. Additionally, Travis led the effort to design and document the onboarding process for new agencies and developed authentication mechanisms for personalized user interactions. This project laid the groundwork for future phases aimed at expanding bot and Live Agent functionalities to other agencies.

      • Worked closely with Solution Architect to ensure the viability of proposed solutions and provided Level of Effort estimates for work prioritization and scheduling.
      • Developed a script to seed complex data structures utilizing advanced import/export capabilities of SFDX (Now SF).
      • Refactored the chatbot and associated Apex code to align with updated API requirements and facilitate bot deployments.
      • Enhanced bot functionality to allow seamless transfer between different Salesforce organizations.
      • Identified and seized opportunities to mentor junior team members, fostering their growth in areas of expressed interest.
      • Led the development and documentation of a comprehensive onboarding plan for new agencies to the existing bot and Live Agent setup.
      • Authored detailed deployment instructions for the chatbot tailored to new agency integration, laying the groundwork for upcoming projects.
      • Engineered and implemented a system that uses ForgeRock authentication tokens, enabling the chatbot to deliver personalized user experiences. This involved close collaboration with ForgeRock and the client team, resulting in a bot that adapts to user login state and modifies the experience accordingly.

    • DE Team Branded Demo | Asset Library

      Sole Contributor

      Internal - Business Development

      Travis took on a personal project to improve client interactions by integrating Lightning-Out and Salesforce APIs. He single-handedly created a website with GatsbyJS, Tailwind CSS, and Hygraph, demonstrating the potential for seamless integration of Salesforce solutions into a client's existing technological landscape. The platform was designed to simplify the process for business development teams to present customized, brand-aligned demos, aimed at strengthening client trust.

      This website allowed for real-time customization of demonstrations, enabling business development to adapt presentations to match a client's branding instantly. Travis also introduced an 'asset library shopping' experience, giving clients an active role in tailoring the demo to their preferences, which showcased a dedicated approach to engaging and innovative client service.

      • Set up and configured Hygraph as a headless CMS and central location for application configuration.
      • Built website using Gatsby (built on React), Hygraph (formerly GraphCMS) and TailwindCSS.
      • Deployed site to S3 bucket.
      • Wrote and configured integrations between AWS, Salesforce, and Hygraph.
      • Set up a use-case demonstration using a Salesforce Einstein chatbot and a dummy website, showcasing the impact of having a means of quickly branding an existing asset to match client websites.

    • Statewide Agency Bot Rollout

      Software Engineer

      State Government

      Following the initial delivery of the chatbot to the client, Travis was engaged to design and develop tailored solutions for individual agencies as they were onboarded to the existing LiveAgent implementation and Einstein Bot. His involvement was on an as-needed basis, requiring him to rapidly acclimate to the project's evolving context to ensure the delivery of quality solutions in a timely manner.

      • Set up the necessary scripts and CMDT structures for bot deployment across new sites.
      • Resolved arising issues to maintain consistent bot performance.
      • Offered technical advice to the team when needed.
      • Updated the bot's embedded script to trigger new behaviours based on certain search parameters.
      • Provided one-on-one guidance to the client's developer for a clear understanding of the bot's implementation.
      • Improved the test site by developing abstractions, reducing the time required to add new agencies.

  • LookThink

    LookThink’s core competency is building novel and impactful web apps using a more traditional web-stack. They needed someone to shore up their fledgling Salesforce practice from a skills and delivery perspective. To that end, Travis was able to ramp up quickly, effectively leveraging his experience to inform high-level project decision making and becoming a force-multiplier through interactions with Jr. developers.

    Projects

    • Householded Charitable Giving

      Technical Lead | Application Architect

      Charitable Giving

      A charitable giving non-profit, focused on matching charitable funds with those willing to execute on the intended use of those funds, wanted to improve the matchmaking process, give better control over the process of deciding who a household's charitable funds were awarded to, and allow those managing funds to assign buckets of money to members of their household to be distributed to causes each member cared about.

      To that end, we improved the ability of charitable fund administrators to define their own terms and work more directly with the people consuming those funds in service of their intended purpose. We provided the automation, user interfaces, and integrations required to reduce friction in the process through direct, real-time collaboration, while delivering the ability to templatize and customize the agreements and criteria used to source potential consumers of charitable funds.

      • Worked with client stakeholders to understand the problems they needed solved.
      • Iterated on a POC user experience shell with the client until the workflow was what they wanted.
      • Implemented agreed upon workflow in Salesforce and demonstrated the working solution before making a final optimization pass, writing tests and deploying to production.

    • Phantom Issues | Salesforce Implemented as ETL Tool

      Fixer of Things

      Unknown

      This was less of a project than it was fixing a long-standing problem the client had. They were using Salesforce as an ETL tool, importing around 100k records daily, transforming the data, and exporting that data to another system. The problem they had was that a significant portion of these records were failing and nobody could figure out why.

      The issue ended up being that the data had significant overlap from day to day, and instead of upserting records, they were deleting everything when the job was complete. Unfortunately, they were unaware that deleted records are not actually removed unless you force a hard delete. This caused duplicate detection to fail incoming records as duplicates of previously imported records still sitting in the recycle bin.

      They were advised to either use a better suited (and frankly cheaper) ETL tool, or force a hard-delete of processed records when the job was done.

      • Identified and documented the root cause of the reported issue, including the steps required to prevent future occurrences.

  • The Predictive Index

    Travis successfully drove significant change in how development work gets done at PI. He advocated for and led daily design/code reviews and moved the team away from a free-for-all system of development and deployment into a structured DevOps program, managing the pipeline and weekly deployments using Copado, a DevOps tool for which he was the primary sponsor. He was deeply involved in standing up the implementation, designing the process, and generating buy-in through effective change management.

    Projects

    • New DevOps Program

      Technical Lead | Change Agent

      SaaS | DevOps

      One of the topics discussed during his interview process was the desire for more structure on the dev team. Travis had experience with the kinds of tools the team would need and, once onboarded, began advocating for those tools and processes. He ultimately got buy-in from the Architect, who secured funding and partnered with him to select and implement a tool chain everyone, including admins, could use. A credit to the team he was working with, adoption was high from the start, and while there were slips and misses here and there, the improvements in code/config quality were evident in the newfound stability production was experiencing.

      • Worked with vendor (Copado) to design and implement their tools.
      • Configured Copado with quality gates and process automation to reduce friction.
      • Developed and documented a formalized process with buy-in from the department head and team.
      • Implemented and managed a regular production release cycle.
      • Implemented and led regular (no-judgement) design/code reviews, leading to a significant reduction in production bugs.
      • Created and executed transition plan to ensure adoption of process and tools would be successful.

    • Realtime Feature Adoption Monitoring

      Software Engineer

      SaaS | Growing Company

      As a rapidly growing SaaS company, PI (The Predictive Index) wanted to track feature adoption by inspecting user interaction with the platform. To meet this requirement, Travis designed and built a Slack integration that could be easily implemented anywhere it was needed to identify the use of a particular feature. This was made easier with the use of feature flags. This project took place before Salesforce bought Slack, so the canned integration tools didn't exist yet.

      • Designed and built defensive callout logic so the same code could run during async and sync operations. This reduced the amount of code that needed to be written and maintained. (A minimal sketch of this pattern follows this list.)
      • Designed with scalability in mind. Built abstractions for easy reimplementation.
      • Worked with stakeholders to refine functionality as continued use led to insights not thought of during ideation.
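
      A minimal sketch of the defensive callout logic from the first bullet is shown below, assuming a Slack incoming webhook behind a named credential; the credential name and message shape are assumptions for the example.

        public with sharing class FeatureAdoptionNotifier implements Queueable, Database.AllowsCallouts {
            private final String message;

            public FeatureAdoptionNotifier(String message) {
                this.message = message;
            }

            // Single entry point used anywhere feature usage is detected.
            public static void notify(String message) {
                if (System.isFuture() || System.isBatch() || System.isQueueable()) {
                    // Already asynchronous; attempt the callout directly
                    // (assumes the surrounding async job permits callouts).
                    postToSlack(message);
                } else {
                    // Synchronous context (e.g. a trigger with pending DML): defer the
                    // callout to a queueable job that explicitly allows callouts.
                    System.enqueueJob(new FeatureAdoptionNotifier(message));
                }
            }

            public void execute(QueueableContext context) {
                postToSlack(message);
            }

            private static void postToSlack(String message) {
                HttpRequest req = new HttpRequest();
                req.setEndpoint('callout:Slack_Webhook'); // Named credential (assumed).
                req.setMethod('POST');
                req.setHeader('Content-Type', 'application/json');
                req.setBody(JSON.serialize(new Map<String, String>{ 'text' => message }));
                new Http().send(req);
            }
        }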

  • Appirio

    At Appirio, Travis designed and developed front-end experiences and back-end business logic to meet client business needs utilizing Apex, Aura Components, Lightning Web Components, and declarative platform tools like Lightning Flows and Process Builder to ensure delivered functionality was efficient, supportable and extensible.

    Projects

    • Student Bolt

      Developer

      Higher Education - (Internal - Student Bolt on the App Exchange)

      The Student Bolt is something Appirio uses as an accelerator when selling to higher education clients. The goal was to improve time to go-live when deploying the Bolt for clients by abstracting configuration.

      • Refactored existing code to promote modularity and scalability using Apex, internal assetized Aura Components, and custom metadata types.
      • Developed Facebook and X (formerly Twitter) integrations to surface profile pictures and social feeds in the UI while retaining configurability using custom metadata types.

    • Armored Experiences

      Developer

      Banking & Financial Services

      The next phase of an existing engagement, this project moved on to the client's Service Cloud. Travis' work centered around building out the tools and business logic needed to satisfy internal compliance and regulatory requirements as well as introducing new features to support new and evolving business processes. Much of the work included refactoring Aura Components and when possible, migrating them to Lightning Web Components. The goal was often to harden automation and front-end assets against undesired use.

      • Implemented new triggers and trigger handlers for new custom objects using a one-trigger-per-object design pattern. Used change event handlers when appropriate to reduce system load where an immediate action was not needed.
      • Developed batchable Apex classes to groom legacy data and schedulable batches to run nightly for data synchronization, providing email notifications with record-level success/fail detail in an HTML-formatted table on job completion. (A minimal sketch of this pattern follows this list.)
      • Hardened existing Aura Components against manipulation of business process for personal gain. Replaced Aura Components with Lightning Web Components where refactoring represented a similar LoE (Level of Effort).
      • Facilitated and participated in design and code review sessions for the purpose of identifying anti-patterns, bugs, and optimizations prior to promoting out of development environments.
      • Developed proofs of concept to illustrate how a feature might work to aid the client in deciding which direction to take the UI and/or business logic.
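
      As a rough sketch of the nightly batch-plus-email pattern from the second bullet, the Apex below shows the general shape; the object, fields, schedule, and recipient are assumptions for the example.

        public with sharing class LegacyDataGroomer implements Database.Batchable<SObject>, Database.Stateful, Schedulable {
            private Integer successCount = 0;
            private Integer failureCount = 0;

            // Schedulable entry point so the batch can run nightly, e.g.
            // System.schedule('Nightly data groom', '0 0 2 * * ?', new LegacyDataGroomer());
            public void execute(SchedulableContext ctx) {
                Database.executeBatch(new LegacyDataGroomer(), 200);
            }

            public Database.QueryLocator start(Database.BatchableContext ctx) {
                // Hypothetical selection of legacy records needing cleanup.
                return Database.getQueryLocator(
                    'SELECT Id, Status__c FROM Legacy_Record__c WHERE Needs_Grooming__c = true'
                );
            }

            public void execute(Database.BatchableContext ctx, List<SObject> scope) {
                for (SObject record : scope) {
                    record.put('Status__c', 'Groomed'); // Illustrative transformation.
                }
                // Partial success so one bad record does not fail the whole chunk.
                for (Database.SaveResult result : Database.update(scope, false)) {
                    if (result.isSuccess()) { successCount++; } else { failureCount++; }
                }
            }

            public void finish(Database.BatchableContext ctx) {
                // Summarize results in a simple HTML table on completion.
                Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
                mail.setToAddresses(new List<String>{ 'admin@example.com' }); // Assumed recipient.
                mail.setSubject('Nightly data groom complete');
                mail.setHtmlBody('<table><tr><th>Succeeded</th><th>Failed</th></tr>' +
                    '<tr><td>' + successCount + '</td><td>' + failureCount + '</td></tr></table>');
                Messaging.sendEmail(new List<Messaging.SingleEmailMessage>{ mail });
            }
        }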

    • Merging Companies

      Developer

      Wealth Management

      Appirio was brought in to consolidate the business processes of two companies recently merged into the Salesforce org of one. The landscape was defined by one company having a mature org and the other having many custom built external tools. The goal was to bring functionality from several external systems and custom tools into Salesforce for adoption across the organization thereby streamlining user experience and reducing complexity of the overall technology stack supporting financial advisors in their daily activities.

      • Designed, developed, and refactored a wide range of front-end and back-end functionality running on Financial Services Cloud, including Aura Components, Lightning Web Components, batchable Apex, invocable Apex, Apex controllers, Apex triggers, Apex change event handlers, Lightning Flows, and Process Builder, as well as typical configuration activities as needed to support development.
      • Worked collaboratively with the client technical team to ensure the functionality delivered was reliable and extensible and that deployments went smoothly. The client's affinity for a well-structured development cycle contributed greatly to our ability to deliver at high velocity.
      • Configured and integrated feature flags into code to enable the phased roll-out of features, allowing end users to continue work as usual while functional groups were onboarded to new execution paths. (A minimal sketch of such a flag check follows this list.)
      • Solutioned features to meet business needs and presented to the client technical team for approval before development work started.
      • Developed proofs of concept providing the client with an idea of how features might work and what the user experience might be like.
      • Refactored existing code to promote OOP principles when appropriate during development of related functionality.
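
      The feature flag gating mentioned above can be as small as the sketch below; the custom metadata type and field names are assumptions for the example.

        public with sharing class FeatureFlags {
            // Returns true when the named flag record exists and is switched on.
            public static Boolean isEnabled(String developerName) {
                Feature_Flag__mdt flag = Feature_Flag__mdt.getInstance(developerName);
                return flag != null && flag.Is_Active__c;
            }
        }

      Business logic then branches on FeatureFlags.isEnabled('New_Quote_Path') (a hypothetical flag name), keeping both execution paths available until a functional group is fully onboarded.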

  • IndyGo

    Travis was hired to bring the CAD/AVL technology stack into the IndyGo IT fold as its care and feeding had been contracted out for decades. The breadth of technology Travis managed and implemented during his time at IndyGo coupled with the depth of knowledge required to maintain those platforms and deliver results gave him the ability to think like an Architect and deliver as a Developer. The projects he led, and/or took part in demonstrate his ability to work well with interdisciplinary teams and communicate effectively with non-technical people ranging from front-line workers to executives.

    Projects

    • RFID Reader Remediation

      Project Lead

      Public Transit

      Having just opened a new transit center (the Julia M. Carson Transit Center) in the heart of Indianapolis, IndyGo was experiencing some unexpected behaviours related to the real-time arrival and departure data presented on signage located at each bay, where riders could look up and see the next bus to arrive and when it was expected to arrive and depart.

      Occasionally, two or more bays would swap information, requiring additional staff to monitor for the behavior and make sure riders got where they needed to go. Travis was tasked with root-causing the problem, formalizing his findings, and formulating a plan for remediation. Having done so, he led a team of 3 and coordinated the activities of 2 contractors to implement his remediation plan, which successfully resolved the issue.

      • Performed root-cause analysis of why real-time signage would often swap information with one another. Efforts included physical and electrical inspection of RFID readers and real-time signage, inspection of vendor source code and configuration files, and proving the root-cause hypothesis through testing, followed by reliable, repeatable demonstration of the problem on demand.
      • Formally documented findings and recommended remediation plan.
      • Presented findings and remediation plan to executive leadership successfully gaining approval to execute on said plan.
      • Executed recommended remediation plan managing the activities of 3 other people over the course of several weeks. This required coordination of vendor activities and internal resources around scheduled arrivals at the downtown transit center to minimize impact to operations.

    • OTA Update Process Optimization

      Sponsor/Lead

      Public Transit

      OTA (Over the Air) updates were problematic, as they had to be done while the vehicle was in the garage, turned on, and given time to boot in order to take an update to schedule data correctly. This process took longer than it did for the average operator to turn the vehicle on and leave the garage. For this reason, when an update included schedule information for the MDT (Mobile Data Terminal) and, for whatever reason, a coach operator did not wait for the update to take and the MDT to restart, their route information, including on-time performance, would not function, as the MDT would not have valid route information for the current date. This would force the operator to interact with dispatch much more frequently, and if they did not have the route memorized, on-time performance would suffer significantly. Travis saw this problem, outlined a solution, worked to get buy-in from the two other departments required to formalize a new process, and owned the execution of that process moving forward.

      • Outlined and proposed initial process including communication plan.
      • Worked with the planning and maintenance departments to drive consensus through thoughtful discussion, taking into account differing and at times conflicting interests.
      • Produced formal documentation for agreed upon process and managed its execution in concert with leadership from other groups involved.

    • CAD/AVL Platform Refresh

      Technical Lead

      Public Transportation (CAD/AVL)

      The CAD/AVL platform deployed on rolling stock was aging and had become unreliable. It was time to modernize on-bus and supporting technologies. IndyGo had been using the same vendor for CAD/AVL technology for decades and wanted to explore other vendors, as the modernization effort was a prime opportunity to do so. This was a very big project that touched every aspect of operations including (but not limited to) dispatch, scheduling, fare collection, real-time telemetry reporting, vehicle maintenance scheduling, and so on.

      • Worked with stakeholders at all levels of the organization to generate consensus and document requirements used for generating and issuing an RFP (later refined after negotiations with the selected vendor).
      • Served as the technical point of contact for the new technology vendor making sure roadblocks were cleared and questions were answered quickly and correctly.
      • Represented IndyGo during acceptance testing at the vendor facility, bringing the requirements matrix to use as a checklist instead of relying entirely on vendor test scripts. Status was communicated to the VP of Technology along with a Go/No-Go recommendation.
      • Reverse engineered a proprietary implementation of the Alpha protocol (engineered vendor lock by previous vendor) by inspection of hex captured directly off the J1939 communication bus (riding on a J1708 physical layer) providing the specification to the vendor so they could interface with existing destination signs saving hundreds of thousands of dollars.
      • Documented pinouts for wiring harnesses specific to each bus model so the new vendor could correctly design their harness for plug and play implementation.

    • Google Transit Integration

      Sole Contributor

      Public Transit (Real-Time Telemetry)

      IndyGo had tried from time to time to get their real-time telemetry data into a state good enough for Google Transit to consume, so their ridership could see where buses were at any given time and what the schedule was.

      Travis was asked to tackle this project given his other successes with thorny issues. After studying the specification, Google Transit data quality standards, and the data IndyGo was providing via protobuffer, Travis was able to resolve all outstanding issues, resulting in IndyGo real-time telemetry and schedule data being accepted and published for the public (and developers) to consume on Google Transit.

      Shortly thereafter, IndyGo hosted a hackathon where local mobile developers were invited to prototype a branded app for their ridership using Google Transit as their base.

      • Reviewed Google data quality standards to understand the target and how to identify malformed or other data that would be deemed to be of poor quality.
      • Chased down and resolved hardware issues presenting as holes in the protobuffer data Google Transit would consume.
      • Worked with Planning to make changes in their software which further improved the quality of data presented in the protobuffers.
      • Wrote a script to process data before writing it out to the protobuffer, correctly handling malformed and missing data as required by Google Transit. Remaining gaps were either attributed to stretches of bus routes where telemetry could not be returned to base, or to unexpected schedule changes that could not be avoided.