The initial work on HCatalog has allowed users to share their data in Hadoop regardless of the tools they use, and has relieved them of needing to know where and how their data is stored. But much more remains to be done to deliver on the full promise of providing metadata and table management for Hadoop clusters. It should be easy to store and process semi-structured and unstructured data via HCatalog. We need interfaces and simple implementations of data life-cycle management tools. We need to deepen the integration with NoSQL and MPP data stores. And we need to be able to store larger metadata, such as partition-level statistics and user-generated metadata. This talk will cover these areas of growth and give an overview of how they might be approached.