Quick overview of what we’re going to cover
End up with a demo showing how to modify a Java buildpack
Just to recap on how apps get deployed to CF
App staging is done by the DEA, which pulls the uploaded app bits plus buildpacks from the blobstore and compiles them into a droplet
This shows the process the DEA uses as it iterates over the available buildpacks to work out which one is needed to support execution of the uploaded code
app + metadata + buildpack output are compiled into a droplet, which is then run in containers on the DEAs
This slide goes into more detail about what is contained within a droplet. Note that we rely on the DEA to provide the base OS, but the droplet contains everything we need to execute the uploaded app
Highlight link between buildpack, DEA, and application
'Container' here refers to something like Tomcat as a Java application container - the OS container that isolates all of this is provided by the DEA
Three types of buildpack - system, admin, and bring-your-own (BYO); admin and BYO allow you to override the defaults or curate your own set
Maximum flexibility while still maintaining control over platform
Simple commands via the cf CLI to create, update and delete buildpacks
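As a sketch (the buildpack name, zip path and positions are placeholders; cf CLI v6 syntax), the admin buildpack lifecycle looks like this:

```shell
# Admin buildpack lifecycle; position 1 means this buildpack is checked first.
# Guarded so the sketch is a no-op where the cf CLI isn't installed.
if command -v cf >/dev/null 2>&1; then
  cf create-buildpack my-buildpack ./my-buildpack.zip 1   # upload + set detect order
  cf update-buildpack my-buildpack -i 2 --enable          # move it later in the order
  cf delete-buildpack my-buildpack -f                     # remove without prompting
fi
```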
Uploaded buildpacks take precedence over admin buildpacks, which take precedence over system ones
Git != GitHub - use any public Git provider or local Git repositories, enables local curation and control to comply with security/audit/compliance issues if you have them
Git repository must be visible on the network that CF is running on - public CF requires public Git repo, private CF can use private or public Git repo
All community-contributed and maintained - there’s a wide range of support for lots of different languages, containers, etc.
Existing buildpacks are a great place to start when creating a new buildpack from scratch.
Simple buildpack API uses three main scripts
Written in bash shell scripts or Ruby
What do we need to find to show that we are using a particular language?
Java intentionally left off this list because the CF Java buildpack has more complex detection criteria
If the app is deployed with a plain cf push, CF iterates over all the admin and system buildpacks, calling their detect scripts until one of them strikes gold
If you specify a buildpack as part of your cf push command, there's no need to call the detect scripts - you've told PCF which buildpack to use
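A detect script can be as simple as a file-existence check. A minimal sketch for a hypothetical Python buildpack (the name and marker files are illustrative), written as a function so it can be tried locally:

```shell
# Sketch of bin/detect: CF calls it as `bin/detect <build-dir>`.
# Printing a name and exiting 0 means "this buildpack can stage the app".
detect() {
  local build_dir="$1"
  if [ -f "$build_dir/requirements.txt" ] || [ -f "$build_dir/setup.py" ]; then
    echo "Python"
    return 0
  fi
  return 1
}

# Try it against a scratch app directory
app=$(mktemp -d)
touch "$app/requirements.txt"
detect "$app"   # prints "Python"
```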
This is where most of the buildpack work is done - the compile script pulls all the needed components together - give more package manager examples here - Python pip, Perl CPAN
Installing the app may mean copying or moving the app files from the location the DEA put them in, or symlinking them to a different location
External source can include custom packages in S3 buckets or public mirrors for public CF, or internal download locations for private CF.
Again, lots of flexibility while maintaining control of the platform
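A compile sketch (the download URL and droplet layout are hypothetical): CF calls the script with a build directory and a cache directory that survives between stagings, which is where downloads from those external sources should be cached:

```shell
# Sketch of bin/compile: CF calls it as `bin/compile <build-dir> <cache-dir>`.
compile() {
  local build_dir="$1" cache_dir="$2"
  mkdir -p "$cache_dir"

  # Fetch the runtime once; later stagings reuse the cached copy.
  if [ ! -f "$cache_dir/runtime.tar.gz" ]; then
    # curl -sL "https://example.com/runtime.tar.gz" -o "$cache_dir/runtime.tar.gz"  # hypothetical URL
    touch "$cache_dir/runtime.tar.gz"   # stubbed out for this sketch
  fi

  # Lay the runtime down next to the app so it ends up inside the droplet.
  mkdir -p "$build_dir/vendor"
  cp "$cache_dir/runtime.tar.gz" "$build_dir/vendor/"
}

build=$(mktemp -d); cache=$(mktemp -d)
compile "$build" "$cache"
ls "$build/vendor"   # runtime.tar.gz
```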
The release script provides feedback metadata back to Cloud Foundry indicating how the application should be executed.
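The release metadata is a small YAML document written to stdout; `web` under `default_process_types` is the command CF runs to start the app. A sketch (the start command is illustrative):

```shell
# Sketch of bin/release: CF calls it as `bin/release <build-dir>` and reads YAML from stdout.
release() {
  cat <<'EOF'
---
default_process_types:
  web: python app.py
EOF
}
release
```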
Java buildpack is a bit more complex because it provides a lot more ‘stuff’
Java buildpack gives us three main things - containers, frameworks, and the JREs
Frameworks - loose definition, basically anything that is orthogonal to the container
And examples of each of those three components
Detection is typically based on file existence and the content of files, which also makes it easy to modify and extend
This list is likely to grow to support other containers like Spring XD.
As with containers, detection is typically based on file existence and content of files.
Can also be done based on services bound to an app, using app bindings in yaml metadata file uploaded alongside code.
The middle part is of interest - this is where the buildpack gets assembled
Get the runtime
Then the frameworks (e.g. the Spring and Play Framework auto-reconfiguration)
Then the container
Use the cf files command to query all the parts that have been assembled to support your running app
The buildpack-installed locations are decisions made by the Java buildpack - other buildpacks will put things in other places.
Can also use the cf files command to query the created yaml file with all the discovered info used to stage the app
Useful for tracking down specific component versions used in an app
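For example (the app name is a placeholder; paths follow the DEA-era droplet layout described above):

```shell
# Poking around a running app's droplet with the DEA-era cf CLI.
# Guarded so the sketch is a no-op where the cf CLI isn't installed.
if command -v cf >/dev/null 2>&1; then
  cf files my-app app/                   # everything the buildpack assembled
  cf files my-app staging_info.yml       # detected buildpack + start command
  cf files my-app logs/staging_task.log  # full staging output
fi
```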
Awesome documentation is in the github repo, well worth a read
Can configure/customise the components already included in the default Java buildpack
Or you can fork it and extend it by adding in your own components or customisations
version and repository_root are common to all components that download artifacts
other options are specific to the component
1. consult <component-name>.yml (openjdk.yml, first line of lower window)
2. read repository_root/index.yml (first line of upper window)
3. look up the specified version in the index (version in red in lower window, first arrow)
4. download the matching artifact (highlighted in red in upper window, first arrow points to this)
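Illustrative fragments of the two files involved (the URLs and version numbers are made up; the real format is documented in the Java buildpack repo):

```yaml
# <component-name>.yml, e.g. openjdk.yml - pins a version pattern and a repository
version: 1.7.0_+
repository_root: https://example.com/openjdk/lucid/x86_64

# repository_root/index.yml - maps each available version to its artifact
---
1.7.0_51: https://example.com/openjdk/lucid/x86_64/openjdk-1.7.0_51.tar.gz
1.8.0_25: https://example.com/openjdk/lucid/x86_64/openjdk-1.8.0_25.tar.gz
```

The version pattern `1.7.0_+` resolves to the newest matching entry in the index (`1.7.0_51` here), and that artifact is what gets downloaded.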
One class is sufficient for most extensions - e.g. a .rb in the framework directory to add a staging timestamp - the example we'll use in the demo
Existing support classes serve as good examples, and it's easy to examine the files to see how things have been implemented
if detect returns nil, compile and release will not be called
only one container support class can return non-nil from detect
After you’ve added your support classes, add them to the main components.yml config file
The buildpack will iterate over all components, applying them as necessary
Java buildpack is probably the best example to use when trying to understand how to customise or extend a buildpack - it has the most complete documentation and examples
Legacy and existing services can also be exposed via a Managed Service
Example: multiple applications connect to the same legacy Oracle database and therefore just need connection info and credentials
Managed services are implemented via service brokers, which allow greater control over remote services
Shows the process of creating a service and then binding to it
An example using the MySQL Service Broker - different plans with different levels of service
Note how create-service maps to provisioning an instance on the relevant plan, delete-service removes instance
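Using the MySQL broker as an example (service and plan names are whatever the broker exposes; the app and instance names are placeholders):

```shell
# Service lifecycle from the developer's side; each call maps to a broker API request.
# Guarded so the sketch is a no-op where the cf CLI isn't installed.
if command -v cf >/dev/null 2>&1; then
  cf create-service p-mysql 100mb-dev my-db   # broker provisions an instance on that plan
  cf bind-service my-app my-db                # broker issues credentials for this app
  cf unbind-service my-app my-db
  cf delete-service my-db -f                  # broker deprovisions the instance
fi
```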
In both ‘service instance creation’ and ‘service binding creation’, org/space/plan identifiers are provided to the ‘service broker’
This gives the broker flexibility in what to do - for example, different options can be offered to users/developers depending on which org/space or plan has been selected
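A sketch of what the cloud controller sends the broker for those two operations (the broker URL and all GUIDs are placeholders), rendered as plain text so it runs anywhere; note the org, space and plan identifiers in the provision payload:

```shell
# The two v2 service broker calls behind create-service and bind-service.
broker=https://broker.example.com

broker_requests() {
  cat <<EOF
PUT $broker/v2/service_instances/INSTANCE-GUID
{"service_id": "SERVICE-GUID", "plan_id": "PLAN-GUID",
 "organization_guid": "ORG-GUID", "space_guid": "SPACE-GUID"}

PUT $broker/v2/service_instances/INSTANCE-GUID/service_bindings/BINDING-GUID
{"service_id": "SERVICE-GUID", "plan_id": "PLAN-GUID"}
EOF
}
broker_requests
```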
Need to register the service broker with the cloud controller - e.g. via Ops Manager
Then make specific plans available to spaces - this can be as coarse or fine-grained as needed
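Via the CLI this is two steps (broker name, credentials and URL are placeholders; cf CLI v6 syntax): register the broker, then expose chosen plans:

```shell
# Register a broker, then grant access plan-by-plan, org-by-org.
# Guarded so the sketch is a no-op where the cf CLI isn't installed.
if command -v cf >/dev/null 2>&1; then
  cf create-service-broker mysql-broker admin secret https://broker.example.com
  cf enable-service-access p-mysql -p 100mb-dev -o my-org   # just this plan, just this org
fi
```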
Gives a high degree of control over access to and consumption of external services, gives existing teams the ‘warm fuzzies’ when integrating PCF into existing infrastructure
CF just needs the service broker API to be in place - what happens behind the scenes, how you interface with a service or present it to PCF, is up to you
Examples of how a ‘service instance’ could be provisioned behind the Service Broker. These could be managed - for example - on a space or plan basis
app-1 and app-2 have different credentials
All of this means you have a great degree of flexibility and control over how you deploy a service to PCF