MERGE 2013 THE PERFORCE CONFERENCE SAN FRANCISCO • APRIL 24−26
Perforce White Paper
To provide a solid foundation for software development
excellence in today's demanding economy, it's critical
to have a software version management solution that
can meet your demands.
Extracting Depot Paths into New
Instances of Their Own
Mark Warren, NVIDIA
INTRODUCTION
As Perforce instances are used over time, they naturally grow in file and metadata size. As new
files are submitted, the metadata accumulates and the instance becomes unwieldy. At some
point, normal operations hold table locks so long that all users are affected. Increasing
hardware performance can mitigate the problem, but there is a limit to how much hardware you
can upgrade. A more practical way to relieve the growing metadata pressure is to move select
datasets to their own instance/depot.
Perfsplit1 is a tool developed by Perforce that extracts a section of a depot from an existing
Perforce database. It accesses a Perforce server's database directly and is platform and
release specific. Perfsplit does not remove data from the source Perforce server, but it does
copy archive files from it. Perfsplit is a good tool for this operation, but it does not resolve
these problems:
• The need for zero downtime. Most instances that are in need of splitting have a very
large user base. The need to keep instances up and running is compounded by the
number of users unable to access their instance once this process is initiated.
• Perfsplit does not rename the new instance's depot. This is undesirable because
having the same depot name across multiple instances can confuse users.
• The need to use "p4 snap" to copy lazy integrated files to their physical lbr locations.
p4 snap can considerably increase the size of the original depot, depending on the size
of the area being split off.
This white paper gives guidelines on a method that resolves all of these issues.
Preparation
To make sure we gather a complete dataset for migration from a live instance, we must prevent
users from making changes to the path(s) being split. With super access rights, this can be
done simply by revoking write access to the path and allowing only read access. This
restriction ensures that the metadata structure we are splitting off stays up to date. Once this
is done, we create a checkpoint of the instance to gather lbr records, and we stand up a
running instance from this checkpoint for Perfsplit to use.
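In the protections table, the restriction might look like the following sketch. The depot path is a placeholder, and the "=write" exclusion syntax (which revokes only write-level access while leaving read intact) should be checked against your server release:

```
read    user    *    *    //targeted/path/to/split/...
=write  user    *    *    -//targeted/path/to/split/...
```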
Despite its inadequacies, this process makes use of Perfsplit; it is necessary to build the
foundation of the new instance. Perfsplit's key mechanism is a map file (the splitmap) that
directs it to the selected path(s) to extract. Because we are splitting not only the initial path(s)
but also their integration history, we need to append that dataset to the splitmap. To obtain it,
we grep the newly created checkpoint of the original instance for the lbrFile record defined in
db.rev2 for all files associated with the depot path we are splitting. The lbrFile filename
specifies where in the archives the file containing the revision may be found.
For example:
1 http://ftp.perforce.com/perforce/tools/perfsplit/perfsplit.html
2 http://www.perforce.com/perforce/doc.current/schema/#db.rev
grep @db.rev@ /checkpoint.XXX | grep //targeted/path/to/split/
This gives you the db.rev entries for the path you want to split. From these entries, pull the
lbrFile column and remove all entries that refer to the original path. What remains is the
location of every lazy integrated file.
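As a sketch, the extraction can be scripted as below. It assumes each db.rev journal record contains exactly two @-quoted depot paths (depotFile first, lbrFile second); the sample records and paths are hypothetical, so adjust the pattern for your checkpoint's schema version:

```shell
# Two hypothetical db.rev journal records: the second @-quoted path in
# each record is the lbrFile (archive) location.
cat > checkpoint.sample <<'EOF'
@pv@ 9 @db.rev@ @//targeted/path/to/split/a.c@ 1 0 0 101 1 1 00 10 @//other/depot/a.c@ 1.101 0
@pv@ 9 @db.rev@ @//targeted/path/to/split/b.c@ 1 0 0 102 1 1 00 10 @//targeted/path/to/split/b.c@ 1.102 0
EOF

grep '@db.rev@' checkpoint.sample |
  grep '//targeted/path/to/split/' |
  grep -o '@//[^@]*@' |                  # every @-quoted depot path, in order
  tr -d '@' |
  awk 'NR % 2 == 0' |                    # keep the 2nd path per record: lbrFile
  grep -v '^//targeted/path/to/split/' | # drop entries under the split path
  sort -u > extra_lbr_paths.txt

cat extra_lbr_paths.txt                  # the lazy integrated file locations
```

With the sample above, extra_lbr_paths.txt ends up containing the single archive path that lives outside the split area.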
Because we are not making use of the p4 snap feature, we need to add these paths to the
splitmap (mapping) file that already contains the path(s) being split from the original depot.
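The resulting splitmap might contain entries along these lines; both paths are placeholders, and the exact map-file syntax should be taken from the Perfsplit documentation:

```
//targeted/path/to/split/...
//other/depot/lazy/archive/path/...
```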
Transition
Once we have this mapping, we can begin our split by running Perfsplit with the minimum
options (source, output, and splitmap file) plus one additional, undocumented option, "-a",
which skips Perfsplit's archive file copy step. This builds, in the output path, a duplicate
instance of the original metadata for all depots associated with the original split path. Because
we don't want two instances with depots of the same name, we next take another checkpoint of
this new instance.
Conversion
With this new checkpoint, we can reshape the metadata into a new data structure. To do this,
we build another instance from the newly created checkpoint, but during creation (replay) we
make substitutions that point the existing data structure at the names we want.
For example, to convert file paths from depot “foo” to depot “bar,” use the following commands:
cat <checkpoint_file> | sed -e 's#//foo/path/#//bar/path/#' | p4d -r $p4root -f -jr -
Now we have a new instance with the correct metadata.
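A quick dry run of the substitution on a single hypothetical journal line shows the reshaping. The record layout here is illustrative only; note the added g flag, which you may also want in the real replay because a db.rev record can carry the depot path twice (depotFile and lbrFile), depending on where the archives will live (see Connection below):

```shell
# The sed expression from the replay pipeline, applied to one sample
# db.rev journal line (layout and paths are hypothetical).
echo '@pv@ 9 @db.rev@ @//foo/path/x.c@ 1 0 0 7 1 1 00 10 @//foo/path/x.c@ 1.7 0' |
  sed -e 's#//foo/path/#//bar/path/#g'
```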
Connection
The conversion now points the original metadata at a new depot area. We need to create this
new depot, "bar", to access that area, and the depot must be pointed at the split files. There
are several options for the depot files. Depending on your situation, you can copy the files
from the original location; leave them in the original location and symlink the new depot to it;
or move them to a new location and then symlink from the original depot location. In every
case, it is important to make sure the original instance does not have write access to these
files.
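A sketch of the third option (move, then symlink back), with placeholder directory names standing in for the two server roots:

```shell
# Hypothetical archive roots for the original and new instances.
mkdir -p orig_root/foo new_root
printf 'rev data' > orig_root/foo/file.c,v

# Move the split archives under the new instance's root, renamed to
# match the new depot, then symlink from the original location so old
# lbrFile paths on the original server still resolve.
mv orig_root/foo new_root/bar
ln -s ../new_root/bar orig_root/foo
```

Whichever layout you pick, remove write permission for the original server's account on these files so only the new instance can modify them.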
Once this is done, you will have a new instance with a different name containing a complete
data structure of split files.
Verification
Verification of the new instance should be run to test the success of the transfer. Only two
errors can occur from a verify:
• Verification returns a "BAD" error. This is reported when the MD5 digest of a file
revision stored in the Perforce database differs from the one calculated by the "p4
verify" command, indicating that the file revision might be corrupted. The most likely
cause is a change to the physical files during transfer. Otherwise, files should be
confirmed by someone familiar with them or by diffing them against the originals.3
• Verification returns a "MISSING" error. This indicates that Perforce cannot find the
specified revisions within the versioned file tree; most likely, the archive file is missing.
Check the lbrFile record for the file and make sure that the file is in its correct location,
that the new instance can access that location, and that the file's location was part of
the splitmap.4
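The BAD case can be illustrated in miniature: p4 verify recomputes each revision's MD5 and compares it with the digest recorded in the database, so any change to the physical archive after the digest was stored triggers the error. The file contents below are hypothetical:

```shell
# Digest recorded at submit time vs. digest of the file after transfer.
printf 'revision contents' > file.c,v
stored=$(md5sum < file.c,v)

printf 'damaged in transit' > file.c,v   # simulate corruption during copy
actual=$(md5sum < file.c,v)

if [ "$stored" != "$actual" ]; then
  echo 'BAD'          # what p4 verify would report for this revision
fi
```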
Cleanup
If you added paths to the splitmap to capture the lazy integrated files, those depots/files will be
accessible in the new instance. They are necessary for the new instance to locate the lazy
integrated archives, but they can make the instance look cluttered because they are not part of
the intended split path. Because these paths exist only so the instance can locate files, not for
user interaction, the extra depots/files can be hidden from user view by restricting them in the
protection table. This makes the new instance look as though it contains only the intended
split depot path while still allowing the instance access.
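A hypothetical protections-table exclusion that hides one of the extra depots from users while the server itself can still read the archive files (protections govern user commands, not the server's own lbr access); the depot path is a placeholder:

```
list    user    *    *    -//other/depot/...
```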
Completion
By implementing these steps around Perfsplit, the issues of zero downtime, duplicate naming,
and integration history are all addressed. Resolving them makes Perfsplit a far more practical
tool in a large installation environment.
3 http://answers.perforce.com/articles/KB_Article/How-to-Handle-p4-verify-BAD-Errors
4 http://answers.perforce.com/articles/KB_Article/MISSING-errors-from-p4-verify

Mais conteúdo relacionado

Mais procurados

Hadoop Performance Optimization at Scale, Lessons Learned at Twitter
Hadoop Performance Optimization at Scale, Lessons Learned at TwitterHadoop Performance Optimization at Scale, Lessons Learned at Twitter
Hadoop Performance Optimization at Scale, Lessons Learned at Twitter
DataWorks Summit
 
Analytical Queries with Hive: SQL Windowing and Table Functions
Analytical Queries with Hive: SQL Windowing and Table FunctionsAnalytical Queries with Hive: SQL Windowing and Table Functions
Analytical Queries with Hive: SQL Windowing and Table Functions
DataWorks Summit
 
Ibm tivoli storage manager how to migrate the library manager function redp0140
Ibm tivoli storage manager how to migrate the library manager function redp0140Ibm tivoli storage manager how to migrate the library manager function redp0140
Ibm tivoli storage manager how to migrate the library manager function redp0140
Banking at Ho Chi Minh city
 

Mais procurados (20)

Big data interview questions and answers
Big data interview questions and answersBig data interview questions and answers
Big data interview questions and answers
 
Introducing JDBC for SPARQL
Introducing JDBC for SPARQLIntroducing JDBC for SPARQL
Introducing JDBC for SPARQL
 
Assignment 2 Theoretical
Assignment 2 TheoreticalAssignment 2 Theoretical
Assignment 2 Theoretical
 
Unit 4 lecture-3
Unit 4 lecture-3Unit 4 lecture-3
Unit 4 lecture-3
 
Hadoop Performance Optimization at Scale, Lessons Learned at Twitter
Hadoop Performance Optimization at Scale, Lessons Learned at TwitterHadoop Performance Optimization at Scale, Lessons Learned at Twitter
Hadoop Performance Optimization at Scale, Lessons Learned at Twitter
 
Introduction to Hbase
Introduction to Hbase Introduction to Hbase
Introduction to Hbase
 
03 pig intro
03 pig intro03 pig intro
03 pig intro
 
Analytical Queries with Hive: SQL Windowing and Table Functions
Analytical Queries with Hive: SQL Windowing and Table FunctionsAnalytical Queries with Hive: SQL Windowing and Table Functions
Analytical Queries with Hive: SQL Windowing and Table Functions
 
Hadoop interview questions
Hadoop interview questionsHadoop interview questions
Hadoop interview questions
 
Ibm tivoli storage manager how to migrate the library manager function redp0140
Ibm tivoli storage manager how to migrate the library manager function redp0140Ibm tivoli storage manager how to migrate the library manager function redp0140
Ibm tivoli storage manager how to migrate the library manager function redp0140
 
Oracle Golden Gate Bidirectional Replication
Oracle Golden Gate Bidirectional ReplicationOracle Golden Gate Bidirectional Replication
Oracle Golden Gate Bidirectional Replication
 
Five major tips to maximize performance on a 200+ SQL HBase/Phoenix cluster
Five major tips to maximize performance on a 200+ SQL HBase/Phoenix clusterFive major tips to maximize performance on a 200+ SQL HBase/Phoenix cluster
Five major tips to maximize performance on a 200+ SQL HBase/Phoenix cluster
 
Apache Hbase Architecture
Apache Hbase ArchitectureApache Hbase Architecture
Apache Hbase Architecture
 
Apache Hadoop MapReduce Tutorial
Apache Hadoop MapReduce TutorialApache Hadoop MapReduce Tutorial
Apache Hadoop MapReduce Tutorial
 
Cloning 2
Cloning 2Cloning 2
Cloning 2
 
Session 04 pig - slides
Session 04   pig - slidesSession 04   pig - slides
Session 04 pig - slides
 
Hbase Quick Review Guide for Interviews
Hbase Quick Review Guide for InterviewsHbase Quick Review Guide for Interviews
Hbase Quick Review Guide for Interviews
 
Hadoop architecture by ajay
Hadoop architecture by ajayHadoop architecture by ajay
Hadoop architecture by ajay
 
HBaseCon 2012 | Gap Inc Direct: Serving Apparel Catalog from HBase for Live W...
HBaseCon 2012 | Gap Inc Direct: Serving Apparel Catalog from HBase for Live W...HBaseCon 2012 | Gap Inc Direct: Serving Apparel Catalog from HBase for Live W...
HBaseCon 2012 | Gap Inc Direct: Serving Apparel Catalog from HBase for Live W...
 
Hadoop Interview Question and Answers
Hadoop  Interview Question and AnswersHadoop  Interview Question and Answers
Hadoop Interview Question and Answers
 

Destaque

[SAP] Perforce Administrative Self Services at SAP
[SAP] Perforce Administrative Self Services at SAP[SAP] Perforce Administrative Self Services at SAP
[SAP] Perforce Administrative Self Services at SAP
Perforce
 
[NetherRealm Studios] Game Studio Perforce Architecture
[NetherRealm Studios] Game Studio Perforce Architecture[NetherRealm Studios] Game Studio Perforce Architecture
[NetherRealm Studios] Game Studio Perforce Architecture
Perforce
 
[NetApp] Simplified HA:DR Using Storage Solutions
[NetApp] Simplified HA:DR Using Storage Solutions[NetApp] Simplified HA:DR Using Storage Solutions
[NetApp] Simplified HA:DR Using Storage Solutions
Perforce
 
Infographic: Perforce vs ClearCase
Infographic: Perforce vs ClearCaseInfographic: Perforce vs ClearCase
Infographic: Perforce vs ClearCase
Perforce
 
[IC Manage] Workspace Acceleration & Network Storage Reduction
[IC Manage] Workspace Acceleration & Network Storage Reduction[IC Manage] Workspace Acceleration & Network Storage Reduction
[IC Manage] Workspace Acceleration & Network Storage Reduction
Perforce
 
[AMD] Novel Use of Perforce for Software Auto-updates and File Transfer
[AMD] Novel Use of Perforce for Software Auto-updates and File Transfer[AMD] Novel Use of Perforce for Software Auto-updates and File Transfer
[AMD] Novel Use of Perforce for Software Auto-updates and File Transfer
Perforce
 
[MathWorks] Versioning Infrastructure
[MathWorks] Versioning Infrastructure[MathWorks] Versioning Infrastructure
[MathWorks] Versioning Infrastructure
Perforce
 
[NetApp Managing Big Workspaces with Storage Magic
[NetApp Managing Big Workspaces with Storage Magic[NetApp Managing Big Workspaces with Storage Magic
[NetApp Managing Big Workspaces with Storage Magic
Perforce
 
[Lucas Films] Using a Perforce Proxy with Alternate Transports
[Lucas Films] Using a Perforce Proxy with Alternate Transports[Lucas Films] Using a Perforce Proxy with Alternate Transports
[Lucas Films] Using a Perforce Proxy with Alternate Transports
Perforce
 
[Mentor Graphics] A Perforce-based Automatic Document Generation System
[Mentor Graphics] A Perforce-based Automatic Document Generation System[Mentor Graphics] A Perforce-based Automatic Document Generation System
[Mentor Graphics] A Perforce-based Automatic Document Generation System
Perforce
 
[Citrix] Perforce Standardisation at Citrix
[Citrix] Perforce Standardisation at Citrix[Citrix] Perforce Standardisation at Citrix
[Citrix] Perforce Standardisation at Citrix
Perforce
 
Infographic: Perforce vs Subversion
Infographic: Perforce vs SubversionInfographic: Perforce vs Subversion
Infographic: Perforce vs Subversion
Perforce
 

Destaque (20)

[SAP] Perforce Administrative Self Services at SAP
[SAP] Perforce Administrative Self Services at SAP[SAP] Perforce Administrative Self Services at SAP
[SAP] Perforce Administrative Self Services at SAP
 
Granular Protections Management with Triggers
Granular Protections Management with TriggersGranular Protections Management with Triggers
Granular Protections Management with Triggers
 
[NetherRealm Studios] Game Studio Perforce Architecture
[NetherRealm Studios] Game Studio Perforce Architecture[NetherRealm Studios] Game Studio Perforce Architecture
[NetherRealm Studios] Game Studio Perforce Architecture
 
[NetApp] Simplified HA:DR Using Storage Solutions
[NetApp] Simplified HA:DR Using Storage Solutions[NetApp] Simplified HA:DR Using Storage Solutions
[NetApp] Simplified HA:DR Using Storage Solutions
 
Infographic: Perforce vs ClearCase
Infographic: Perforce vs ClearCaseInfographic: Perforce vs ClearCase
Infographic: Perforce vs ClearCase
 
[IC Manage] Workspace Acceleration & Network Storage Reduction
[IC Manage] Workspace Acceleration & Network Storage Reduction[IC Manage] Workspace Acceleration & Network Storage Reduction
[IC Manage] Workspace Acceleration & Network Storage Reduction
 
From ClearCase to Perforce Helix: Breakthroughs in Scalability at Intel
From ClearCase to Perforce Helix: Breakthroughs in Scalability at IntelFrom ClearCase to Perforce Helix: Breakthroughs in Scalability at Intel
From ClearCase to Perforce Helix: Breakthroughs in Scalability at Intel
 
[AMD] Novel Use of Perforce for Software Auto-updates and File Transfer
[AMD] Novel Use of Perforce for Software Auto-updates and File Transfer[AMD] Novel Use of Perforce for Software Auto-updates and File Transfer
[AMD] Novel Use of Perforce for Software Auto-updates and File Transfer
 
[Webinar] The Changing Role of Release Engineering in a DevOps World with J. ...
[Webinar] The Changing Role of Release Engineering in a DevOps World with J. ...[Webinar] The Changing Role of Release Engineering in a DevOps World with J. ...
[Webinar] The Changing Role of Release Engineering in a DevOps World with J. ...
 
Managing Microservices at Scale
Managing Microservices at ScaleManaging Microservices at Scale
Managing Microservices at Scale
 
[MathWorks] Versioning Infrastructure
[MathWorks] Versioning Infrastructure[MathWorks] Versioning Infrastructure
[MathWorks] Versioning Infrastructure
 
[NetApp Managing Big Workspaces with Storage Magic
[NetApp Managing Big Workspaces with Storage Magic[NetApp Managing Big Workspaces with Storage Magic
[NetApp Managing Big Workspaces with Storage Magic
 
Continuous Validation
Continuous ValidationContinuous Validation
Continuous Validation
 
How Continuous Delivery Helped McKesson Create Award Winning Applications
How Continuous Delivery Helped McKesson Create Award Winning ApplicationsHow Continuous Delivery Helped McKesson Create Award Winning Applications
How Continuous Delivery Helped McKesson Create Award Winning Applications
 
[Lucas Films] Using a Perforce Proxy with Alternate Transports
[Lucas Films] Using a Perforce Proxy with Alternate Transports[Lucas Films] Using a Perforce Proxy with Alternate Transports
[Lucas Films] Using a Perforce Proxy with Alternate Transports
 
[Mentor Graphics] A Perforce-based Automatic Document Generation System
[Mentor Graphics] A Perforce-based Automatic Document Generation System[Mentor Graphics] A Perforce-based Automatic Document Generation System
[Mentor Graphics] A Perforce-based Automatic Document Generation System
 
[Citrix] Perforce Standardisation at Citrix
[Citrix] Perforce Standardisation at Citrix[Citrix] Perforce Standardisation at Citrix
[Citrix] Perforce Standardisation at Citrix
 
Cheat Sheet
Cheat SheetCheat Sheet
Cheat Sheet
 
Infographic: Perforce vs Subversion
Infographic: Perforce vs SubversionInfographic: Perforce vs Subversion
Infographic: Perforce vs Subversion
 
Conquering Chaos: Helix & DevOps
Conquering Chaos: Helix & DevOpsConquering Chaos: Helix & DevOps
Conquering Chaos: Helix & DevOps
 

Semelhante a [Nvidia] Extracting Depot Paths Into New Instances of Their Own

[Pixar] Templar Underminer
[Pixar] Templar Underminer[Pixar] Templar Underminer
[Pixar] Templar Underminer
Perforce
 
dylibencapsulation
dylibencapsulationdylibencapsulation
dylibencapsulation
Cole Herzog
 
Datastage parallell jobs vs datastage server jobs
Datastage parallell jobs vs datastage server jobsDatastage parallell jobs vs datastage server jobs
Datastage parallell jobs vs datastage server jobs
shanker_uma
 
CASPUR Staging System II
CASPUR Staging System IICASPUR Staging System II
CASPUR Staging System II
Andrea PETRUCCI
 

Semelhante a [Nvidia] Extracting Depot Paths Into New Instances of Their Own (20)

[Pixar] Templar Underminer
[Pixar] Templar Underminer[Pixar] Templar Underminer
[Pixar] Templar Underminer
 
dylibencapsulation
dylibencapsulationdylibencapsulation
dylibencapsulation
 
Genomics Is Not Special: Towards Data Intensive Biology
Genomics Is Not Special: Towards Data Intensive BiologyGenomics Is Not Special: Towards Data Intensive Biology
Genomics Is Not Special: Towards Data Intensive Biology
 
Deployment with ExpressionEngine
Deployment with ExpressionEngineDeployment with ExpressionEngine
Deployment with ExpressionEngine
 
A General Purpose Extensible Scanning Query Architecture for Ad Hoc Analytics
A General Purpose Extensible Scanning Query Architecture for Ad Hoc AnalyticsA General Purpose Extensible Scanning Query Architecture for Ad Hoc Analytics
A General Purpose Extensible Scanning Query Architecture for Ad Hoc Analytics
 
White Paper: Using Perforce 'Attributes' for Managing Game Asset Metadata
White Paper: Using Perforce 'Attributes' for Managing Game Asset MetadataWhite Paper: Using Perforce 'Attributes' for Managing Game Asset Metadata
White Paper: Using Perforce 'Attributes' for Managing Game Asset Metadata
 
Operating system lab manual
Operating system lab manualOperating system lab manual
Operating system lab manual
 
White Paper: Scaling Servers and Storage for Film Assets
White Paper: Scaling Servers and Storage for Film AssetsWhite Paper: Scaling Servers and Storage for Film Assets
White Paper: Scaling Servers and Storage for Film Assets
 
OS Lab Manual.pdf
OS Lab Manual.pdfOS Lab Manual.pdf
OS Lab Manual.pdf
 
Writing and using php streams and sockets tek11
Writing and using php streams and sockets   tek11Writing and using php streams and sockets   tek11
Writing and using php streams and sockets tek11
 
Enjoying the Journey from Puppet 3.x to Puppet 4.x (PuppetConf 2016)
Enjoying the Journey from Puppet 3.x to Puppet 4.x (PuppetConf 2016)Enjoying the Journey from Puppet 3.x to Puppet 4.x (PuppetConf 2016)
Enjoying the Journey from Puppet 3.x to Puppet 4.x (PuppetConf 2016)
 
Instructions for using the phase wrapping and unwrapping code
Instructions for using the phase wrapping and unwrapping codeInstructions for using the phase wrapping and unwrapping code
Instructions for using the phase wrapping and unwrapping code
 
PuppetConf 2016: Enjoying the Journey from Puppet 3.x to 4.x – Rob Nelson, AT&T
PuppetConf 2016: Enjoying the Journey from Puppet 3.x to 4.x – Rob Nelson, AT&T PuppetConf 2016: Enjoying the Journey from Puppet 3.x to 4.x – Rob Nelson, AT&T
PuppetConf 2016: Enjoying the Journey from Puppet 3.x to 4.x – Rob Nelson, AT&T
 
Migraine Drupal - syncing your staging and live sites
Migraine Drupal - syncing your staging and live sitesMigraine Drupal - syncing your staging and live sites
Migraine Drupal - syncing your staging and live sites
 
Upgrading hadoop
Upgrading hadoopUpgrading hadoop
Upgrading hadoop
 
Migration from 8.1 to 11.3
Migration from 8.1 to 11.3Migration from 8.1 to 11.3
Migration from 8.1 to 11.3
 
Datastage parallell jobs vs datastage server jobs
Datastage parallell jobs vs datastage server jobsDatastage parallell jobs vs datastage server jobs
Datastage parallell jobs vs datastage server jobs
 
Lecture-20.pptx
Lecture-20.pptxLecture-20.pptx
Lecture-20.pptx
 
White Paper: Perforce Administration Optimization, Scalability, Availability ...
White Paper: Perforce Administration Optimization, Scalability, Availability ...White Paper: Perforce Administration Optimization, Scalability, Availability ...
White Paper: Perforce Administration Optimization, Scalability, Availability ...
 
CASPUR Staging System II
CASPUR Staging System IICASPUR Staging System II
CASPUR Staging System II
 

Mais de Perforce

Mais de Perforce (20)

How to Organize Game Developers With Different Planning Needs
How to Organize Game Developers With Different Planning NeedsHow to Organize Game Developers With Different Planning Needs
How to Organize Game Developers With Different Planning Needs
 
Regulatory Traceability: How to Maintain Compliance, Quality, and Cost Effic...
Regulatory Traceability:  How to Maintain Compliance, Quality, and Cost Effic...Regulatory Traceability:  How to Maintain Compliance, Quality, and Cost Effic...
Regulatory Traceability: How to Maintain Compliance, Quality, and Cost Effic...
 
Efficient Security Development and Testing Using Dynamic and Static Code Anal...
Efficient Security Development and Testing Using Dynamic and Static Code Anal...Efficient Security Development and Testing Using Dynamic and Static Code Anal...
Efficient Security Development and Testing Using Dynamic and Static Code Anal...
 
Understanding Compliant Workflow Enforcement SOPs
Understanding Compliant Workflow Enforcement SOPsUnderstanding Compliant Workflow Enforcement SOPs
Understanding Compliant Workflow Enforcement SOPs
 
Branching Out: How To Automate Your Development Process
Branching Out: How To Automate Your Development ProcessBranching Out: How To Automate Your Development Process
Branching Out: How To Automate Your Development Process
 
How to Do Code Reviews at Massive Scale For DevOps
How to Do Code Reviews at Massive Scale For DevOpsHow to Do Code Reviews at Massive Scale For DevOps
How to Do Code Reviews at Massive Scale For DevOps
 
How to Spark Joy In Your Product Backlog
How to Spark Joy In Your Product Backlog How to Spark Joy In Your Product Backlog
How to Spark Joy In Your Product Backlog
 
Going Remote: Build Up Your Game Dev Team
Going Remote: Build Up Your Game Dev Team Going Remote: Build Up Your Game Dev Team
Going Remote: Build Up Your Game Dev Team
 
Shift to Remote: How to Manage Your New Workflow
Shift to Remote: How to Manage Your New WorkflowShift to Remote: How to Manage Your New Workflow
Shift to Remote: How to Manage Your New Workflow
 
Hybrid Development Methodology in a Regulated World
Hybrid Development Methodology in a Regulated WorldHybrid Development Methodology in a Regulated World
Hybrid Development Methodology in a Regulated World
 
Better, Faster, Easier: How to Make Git Really Work in the Enterprise
Better, Faster, Easier: How to Make Git Really Work in the EnterpriseBetter, Faster, Easier: How to Make Git Really Work in the Enterprise
Better, Faster, Easier: How to Make Git Really Work in the Enterprise
 
Easier Requirements Management Using Diagrams In Helix ALM
Easier Requirements Management Using Diagrams In Helix ALMEasier Requirements Management Using Diagrams In Helix ALM
Easier Requirements Management Using Diagrams In Helix ALM
 
How To Master Your Mega Backlog
How To Master Your Mega Backlog How To Master Your Mega Backlog
How To Master Your Mega Backlog
 
Achieving Software Safety, Security, and Reliability Part 3: What Does the Fu...
Achieving Software Safety, Security, and Reliability Part 3: What Does the Fu...Achieving Software Safety, Security, and Reliability Part 3: What Does the Fu...
Achieving Software Safety, Security, and Reliability Part 3: What Does the Fu...
 
How to Scale With Helix Core and Microsoft Azure
How to Scale With Helix Core and Microsoft Azure How to Scale With Helix Core and Microsoft Azure
How to Scale With Helix Core and Microsoft Azure
 
Achieving Software Safety, Security, and Reliability Part 2
Achieving Software Safety, Security, and Reliability Part 2Achieving Software Safety, Security, and Reliability Part 2
Achieving Software Safety, Security, and Reliability Part 2
 
Should You Break Up With Your Monolith?
Should You Break Up With Your Monolith?Should You Break Up With Your Monolith?
Should You Break Up With Your Monolith?
 
Achieving Software Safety, Security, and Reliability Part 1: Common Industry ...
Achieving Software Safety, Security, and Reliability Part 1: Common Industry ...Achieving Software Safety, Security, and Reliability Part 1: Common Industry ...
Achieving Software Safety, Security, and Reliability Part 1: Common Industry ...
 
What's New in Helix ALM 2019.4
What's New in Helix ALM 2019.4What's New in Helix ALM 2019.4
What's New in Helix ALM 2019.4
 
Free Yourself From the MS Office Prison
Free Yourself From the MS Office Prison Free Yourself From the MS Office Prison
Free Yourself From the MS Office Prison
 

Último

Último (20)

Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
[Nvidia] Extracting Depot Paths Into New Instances of Their Own

This white paper is intended to give guidelines on a method that resolves all of these issues.

Preparation

To make sure we gather a complete dataset for migration from a live instance, it's necessary to prevent users from making changes to the path(s) we are splitting. With super access rights, this can be done by simply replacing read-write access to this path with read-only access.
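As an illustration, the read-only restriction might be expressed in the protection table roughly as follows. The group name, path, and admin account here are hypothetical; your existing protections determine the exact entries to replace:

```
## Hypothetical protections entries: replace write access to the
## path being split with read-only access, keeping admin access.
read group all-users * //targeted/path/to/split/...
super user p4admin * //...
```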
This restriction ensures that the metadata structure we are splitting off will be up to date. Once this is done, we need to create a checkpoint of the instance to gather lbr records, and we need a running instance built from that checkpoint for Perfsplit to use.

Despite Perfsplit's inadequacies, this process makes use of it; Perfsplit is necessary to build the foundation of the new instance. The key function of Perfsplit is using a map file (the splitmap) to direct it to the selected path(s) to extract. Because we are splitting not only the initial path(s) but also the integration history, we will need to append that dataset to the splitmap. To get it, we grep the newly created checkpoint of the original instance for the lbrFile record, defined in db.rev,2 of all files associated with the depot path we are splitting. The lbrFile filename specifies where in the archives the file containing the revision may be found. For example:

1 http://ftp.perforce.com/perforce/tools/perfsplit/perfsplit.html
2 http://www.perforce.com/perforce/doc.current/schema/#db.rev
grep @db.rev@ /checkpoint.XXX | grep //targeted/path/to/split/

This gives you the db.rev entries for the path you want to split. From these entries, pull the lbrFile column and remove all entries referring to the original path; what remains is the location of every lazily integrated file. Because we are not making use of the p4 snap feature, we need to add these paths to the splitmap file that already contains the path(s) we are splitting from the original depot.

Transition

Once we have this mapping, we can begin our split using Perfsplit with the minimum options (source, output, and splitmap file), plus an additional, undocumented option, -a, to skip Perfsplit's archive file copy step. This builds, in the output path, a duplicate instance of the original metadata for all depots associated with the original split path. Because we don't want two instances with depots of the same name, we next take another checkpoint of this new instance.

Conversion

With this new checkpoint, we can shape the metadata into a new data structure. To do this, we build another instance from the newly created checkpoint, but during creation (replay) we make some substitutions to point the current data structure at what we want. For example, to convert file paths from depot "foo" to depot "bar," use the following command:

cat <checkpoint_file> | sed -e 's#//foo/path/#//bar/path/#' | p4d -r $p4root -f -jr -

Now we have a new instance with the correct metadata.

Connection

The conversion now points the original metadata to a new depot area. We will need to create this new depot, "bar," to access that area, and the new depot needs to be pointed at the split files. There are a number of options for the depot files.
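For instance, one option, sketched below with hypothetical directory names, is to leave the archive files where they are, symlink the new instance's depot root to them, and then strip write permission so the original instance can no longer modify them:

```shell
# Hypothetical paths: /tmp/orig_root stands in for the original server
# root, /tmp/new_root for the new instance's server root.
mkdir -p /tmp/orig_root/foo/path      # original archive location
mkdir -p /tmp/new_root                # new instance server root

# Point the new depot "bar" at the original archives via a symlink.
ln -sfn /tmp/orig_root/foo/path /tmp/new_root/bar

# Make sure the original instance can no longer write these files.
chmod -R a-w /tmp/orig_root/foo/path
```

The same outline applies if you instead copy or move the archives; only the source and target of the link (or copy) change.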
Depending on your situation, you can copy the files from the original location; leave them in the original location and symlink the new depot to them; or move them to a new location and symlink from the original depot location. In every case, it is important to make sure the original depot does not have write access to these files. Once this is done, you will have a new instance, with a different name, containing a complete data structure of the split files.

Verification

Verification of the new instance should be run to test the success of the transfer. Only two errors can occur from a verify:
• Verification returns a "BAD" error. This is reported when the MD5 digest of a file revision stored in the Perforce database differs from the one calculated by the p4 verify command. It indicates that the file revision might be corrupted, most likely because the physical files changed during transfer. Otherwise, the files should be confirmed by someone familiar with them or by diffing them against the originals.3

• Verification returns a "MISSING" error. This indicates that Perforce cannot find the specified revisions within the versioned file tree; most likely the archive file is missing. Check the lbrFile record of the file and make sure that the file is in its correct location, that the new instance can access that location, and that the file's location was part of the splitmap.4

Cleanup

If you added paths to the splitmap to capture the lazily integrated files, those depots/files will be accessible in the new instance. They are necessary for the new instance to locate the files, but they can make the new instance look cluttered because they are not part of the originally intended split path. Because these paths exist only for the instance to locate files, not for user interaction, the extra depots/files can be hidden from user view by restricting them in the protection table. This makes the new instance look as if it contains only the intended split depot path while still allowing the instance access.

Completion

By implementing these steps with Perfsplit, the issues regarding zero downtime, duplicate naming, and integration history are addressed. Resolving these issues makes Perfsplit a more desirable tool in a large installation environment.
3 http://answers.perforce.com/articles/KB_Article/How-to-Handle-p4-verify-BAD-Errors
4 http://answers.perforce.com/articles/KB_Article/MISSING-errors-from-p4-verify