February 7

 

  • Review: Doug Chase to review and propose refinements to the current SFDC product taxonomy – output = separately from how the SKUs shook out, a human-readable schema for how product managers think and talk about products and their hierarchy; this schema should be applied consistently across user experiences (e.g. Salesforce vs. PowerBI use cases). See the sketch after this item.

    • Is there a distinction between “sub-group” (SFDC) and “sub-category” (NetSuite)?
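
A rough sketch, assuming placeholder group, sub-group, and product names (not the real taxonomy), of how a shared human-readable hierarchy plus an SFDC sub-group / NetSuite sub-category mapping could be captured:

```python
# Hypothetical, human-readable product hierarchy.
# Group, sub-group, and product names below are placeholders, not the real taxonomy.
PRODUCT_HIERARCHY = {
    "Core Platform": {                      # group: how PMs talk about the product family
        "sub_groups": {
            "Voice": ["Product A", "Product B"],
            "Messaging": ["Product C"],
        },
    },
    "Layered Apps": {
        "sub_groups": {
            "Analytics": ["Product D"],
        },
    },
}

# If SFDC "sub-group" and NetSuite "sub-category" use different labels for the same
# bucket, a simple mapping keeps the two systems reconcilable without renaming either.
SFDC_SUBGROUP_TO_NETSUITE_SUBCATEGORY = {
    "Voice": "Voice Products",          # placeholder label pairs
    "Messaging": "Messaging Products",
    "Analytics": "Analytics Apps",
}
```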

 

  • Review: Doug Chase to compile the current state across the company

    • Review results and define next steps. How do we get consistency across the whole company?

 

  • Action: (Needs owner) Let's create an Analytics portal / table of contents – we have overlapping data being consumed from many sources (APIs, databases, data lake, data warehouse, Excel, Google Sheets, punched cards, rope core memory, braille) and presented in multiple output channels (SFDC, PowerBI, NinjaCat, probably others). A rough catalog-entry sketch follows the list below.

    • There should be some kind of ANALYTICS STOREFRONT concept where you can reliably:

      • Find the data, visualization, or widget you’re looking for  

      • Ask for one to be created

      • Find a list of links to reports about various topics (@aishwarya has a proto-version of this already to share)

      • Learn where the data for a given visualization comes from, and how it's derived
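
A minimal sketch, assuming a hypothetical catalog-entry structure (field names, example values, and the request_link idea are illustrative assumptions, not an existing internal schema), of what one storefront entry might record so a consumer can find a visualization, see where its data comes from, and ask for changes:

```python
from dataclasses import dataclass, field

@dataclass
class StorefrontEntry:
    """One entry in the hypothetical analytics storefront catalog.

    All field names and example values are assumptions for illustration,
    not an existing internal schema.
    """
    name: str                   # what the report/visualization is called
    output_channel: str         # e.g. "PowerBI", "SFDC", "NinjaCat"
    owner: str                  # who to ask for changes or a new version
    source_datasets: list[str] = field(default_factory=list)  # lake/warehouse tables it reads
    derivation: str = ""        # plain-English note on how the numbers are computed
    request_link: str = ""      # where to ask for a new or modified widget

# Hypothetical example entry
entry = StorefrontEntry(
    name="Layered App Penetration",
    output_channel="PowerBI",
    owner="Drew's team",
    source_datasets=["lake.product_sales_history"],
    derivation="Installed base per product, derived from lake snapshot data.",
)
```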

         

  • Review: Are there duplicated datasets between lake & warehouse for product sales data?

    • Charlotte had asked @Martial to sort this out because there was some angst somewhere about it

    • Drew’s sales data is intended to reflect the “installed” base rather than the “entitled” set from SFDC

      • Ideally Drew would like to be pulling data for Layered App Penetration from…?

        • Right now, this is based on manual data recorded by Drew’s team

        • We would ideally pull this from historical data stored in the lake. The main goal is to get away from manual recording, since you can't re-apply filtering to past data and it requires human entry. That said, there are some challenges to pulling point-in-time data for all the datasets... so I don't have any timeframes around tackling this. A rough sketch of what a point-in-time pull could look like is below.
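
A minimal sketch, assuming a hypothetical lake table (product_install_history) that stores daily snapshots with a snapshot_date column; the table name, column names, and the run_query callable are assumptions for illustration, not an existing pipeline:

```python
from datetime import date

def layered_app_penetration_asof(run_query, as_of: date):
    """Pull installed-base counts as of a given date from the lake.

    `run_query` stands in for whatever query interface the lake exposes;
    the table and column names are illustrative assumptions.
    """
    sql = """
        SELECT product, COUNT(DISTINCT account_id) AS installed_accounts
        FROM product_install_history
        WHERE snapshot_date = %(as_of)s   -- point-in-time filter, no manual entry
        GROUP BY product
    """
    return run_query(sql, {"as_of": as_of})
```

Because filtering happens at query time, past numbers can be re-derived with different filters, which is the main thing the manual process can't do.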