GitHub – the world’s largest developer platform – is a barometer of what’s happening in the world of web development. Its recently released 2019 State of the Octoverse report offers insight into how developers are using the GitHub platform and what trends have emerged across web development over the last year.
The report reveals that 10 million new developers joined the GitHub community in the last year, taking its total membership to over 40 million. Another 44 million code repositories were created on the platform, and year-on-year more contributors (many of them working on open source projects) are coming from outside the US.
The most popular languages of the year also get a mention. Topping the charts – and number one for the sixth year in a row – is the old developer favourite, JavaScript. Getting to grips with vanilla JS is not always easy, but there are plenty of great JavaScript APIs out there that can help kickstart any project.
Charging up into second place is Python, a general-purpose, high-level language used to build desktop apps, websites and web apps. It’s used by big-name brands such as Facebook, Instagram, Spotify and Netflix, so its rise is not really surprising.
The report also reveals that schools are a key player in bringing on the next generation of developers and GitHub is playing its part. Thirty-one thousand teachers have used GitHub in their courses to teach real-world developer workflows, with 1.7 million students having learnt to code on GitHub – an impressive increase of 55 per cent on the previous year. This is probably driven by the free GitHub Student Developer Pack.
To get more insight, check out the complete report.
GitHub Sponsors is now out of beta and generally available to developers with bank accounts in 30 countries and growing.
Since GitHub Sponsors launched, the beta has grown exponentially, reaching tens of thousands of developers in the GitHub community. It’s been amazing to see what open source developers have already done with sponsorships in just a few months.
Our small team was busy this summer. We’ve spoken to hundreds of open source developers and sponsors, merged hundreds of pull requests, and added many of the key features that our patient beta users requested—and there’s still so much more to do!
Next steps
This is just the beginning for native sponsorships on GitHub. We’re working hard to build out great sponsorship experiences around the world.
If you don’t have a bank account in one of the 30 countries where GitHub Sponsors is generally available, you can still sign up on the waitlist to join the beta in your country. You’ll also receive news about when GitHub Sponsors is generally available.
In the coming months, we’ll continue to roll out general availability to countries where GitHub does business, incorporate feedback, and improve the tools you need to connect with the community you depend on and collaborate with.
At GitHub, we believe in empowering developers around the world. We also believe in basic human rights and treating people with respect and dignity. Today, I wanted to share a message I sent to all employees yesterday that is related to this, as it is important that we share our views on immigration policy with the world and not just internally with employees.
Hubbers,
In August, the GitHub leadership team learned about a pending renewal of our product by the U.S. Immigration & Customs Enforcement (ICE) agency. Since then, we have been talking with people throughout the company, based on our own personal concerns and those raised by Hubbers. This topic is important to me and the rest of the GitHub leadership team. I wanted to connect with you directly and share details on what we know about this purchase, and the principles by which we make decisions in these areas.
In April 2016, the Immigration & Customs Enforcement (ICE) agency began the process to purchase a license of GitHub Enterprise Server. Both the original purchase, as well as the recent renewal, were made through one of our reseller partners. The revenue from the purchase is less than $200,000 and not financially material for our company.
The license that was purchased is for GitHub Enterprise Server, which is our on-premises product. GitHub does not have a professional services agreement with ICE, and GitHub is not consulting with ICE on any of their projects or initiatives. GitHub has no visibility into how this software is being used, other than presumably for software development and version control.
Like many Hubbers, I strongly disagree with many of the current administration’s immigration policies, including the practice of separating families at the border, the Muslim travel ban, and the efforts to dismantle the DACA program that protects people brought to the U.S. as children without documentation. The leadership team shares these views. These policies run counter to our values as a company, and to our ethics as people. We have spoken out as a company against these practices, and joined with other companies in protesting them. You can read our public statements in the Keep the Families Together Act letter, the Muslim travel ban amicus briefs, and the DACA business leaders letter of support. We also joined an amicus brief last year to protect Sanctuary Cities.
Our parent company, Microsoft, has also publicly opposed these same policies. Microsoft is the sole business that is a direct plaintiff in the litigation that will be heard by the United States Supreme Court next month challenging the rescission of the DACA program. Microsoft has a long history of advocating for migrants and immigration law reform. Microsoft CEO Satya Nadella has spoken passionately about his own experience as an immigrant to the United States, and how Microsoft has consistently stood up for immigration policies that preserve every person’s dignity and human rights, and advocated for change.
We should be proud of all these steps GitHub and Microsoft have taken.
ICE is a large government agency with more than 20,000 employees that is responsible for many things. While ICE does manage immigration law enforcement, including the policies that both GitHub and Microsoft are on record strongly opposing, they are also on the front lines of fighting human trafficking, child exploitation, terrorism and transnational crime, gang violence, money laundering, intellectual property theft, and cybercrime.
In approaching the topic of government purchases, we use the same overarching policy framework as Microsoft:
First, we recognize that ICE is responsible for both enforcing the US immigration policies with which we passionately disagree, as well as policies that are critical to our society, such as fighting human trafficking. We do not know the specific projects that the on-premises GitHub Enterprise Server license is being used with, but recognize it could be used in projects that support policies we both agree and disagree with.
Second, we do have license terms for GitHub Enterprise Server, and also terms of service and acceptable use policies for GitHub.com, and we require customers’ use of GitHub to comply with those terms and with applicable laws. If and when we do discover violations of our terms or of laws, we take action to enforce those terms, and do so in a principled, consistent way. That applies to ICE and any other GitHub users or customers.
Third, we respect the fact that for those of us in the United States, we live in a democratic republic in which the public elects our officials and they decide, pursuant to the rule of law, the policies the government will pursue. Tech companies, in contrast, are not elected by the public. But we have a corporate voice, and we can use our voice and our resources to seek changes in the policies that we oppose. As a matter of principle, we believe the appropriate way to advocate for our values in a democracy is to use our corporate voice, and not to unplug technology services when government customers use them to do things to which we object.
Fourth, we believe that this principled approach will also be impactful as a matter of pragmatism. Attempting to cancel a purchase will not convince the current administration to alter immigration policy. Other actions, such as public advocacy, supporting lawsuits, meaningful philanthropy, and leveraging the vast resources of Microsoft will have the greatest likelihood of affecting public policy. Our voice is heard better by policymakers when we have a seat at the table.
Finally, we think it’s important to keep some distinctions in mind. For example, direct consulting and professional services fall into one category of work by our employees. The use of our tools by software developers for their own private work is something different. We want software developers around the world to have the freedom to operate with a level of privacy. A world where developers in one country or every country are required to tell us what type of software they are creating would, in our view, undermine the fundamental rights of software developers. Just as Microsoft for more than three decades has licensed Microsoft Word without demanding to know what customers use it to write, we believe it would be wrong for GitHub to demand that software developers tell us what they are using our tools to do.
As a result, the GitHub leadership team intends to take the following steps:
We will continue to participate in policy and advocacy efforts to change the current administration’s terrible immigration policies like the family separation policy and the effort to rescind DACA. We will continue to use our voice to raise awareness in defense of those groups impacted by these policies. We will leverage Microsoft’s Legislative & Government Affairs teams to amplify the impact we can have on these issues.
We will donate $500,000—in excess of the value of the purchase by ICE—to nonprofit organizations working to support immigrant communities targeted by the current administration. The Social Impact team will get Hubber input on which nonprofits can provide the most impact for migrants, to be supported via this donation. In the past, we’ve supported organizations like Border Angels, Families for Freedom, and Movement on the Ground.
We are proud of the work Hubbers have been doing to support immigrants and we will continue to encourage and support employees who want to donate their time or other resources to causes in support of humane immigration policies and migrant rights. For example, our Legal team is starting to work with Kids In Need of Defense (KIND), to provide pro-bono legal services to support unaccompanied children who are appearing in immigration court. KIND also needs translation services, and so this could be an opportunity for others with language skills across the company. We are sure there are many other non-profits worthy of support that Hubbers can bring to our attention as well.
We have been discussing this issue with people throughout the company over the past several weeks. We are also aware that some employees are now working on a letter related to this. If there are ways in which GitHub can do more with our policy and philanthropy, or other ways we can leverage GitHub resources for change, we want to hear from you. For those interested in hearing more about this, we will be hosting a Q&A with the LT on this topic.
As software becomes more important in the world, we will continue to face increasingly challenging political and social questions. Even with careful thought, we will sometimes make mistakes. My hope is that we can be an organization that works hard to make principle-based decisions, that regularly reflects on and remains willing to refine its principles, and that recognizes the inevitability of interpersonal disagreement around those principles and challenges that constructively. It’s incumbent on all of us to find ways to cohesively navigate the increasingly turbulent times we find ourselves in.
Nat and the GitHub Leadership Team
Most static site generators make installation easy, but each project still requires configuration after installation. When you build a lot of similar projects, you may duplicate effort during the setup phase. GitHub template repositories may save you a lot of time if you find yourself:
creating the same folder structures from previous projects,
copying and pasting config files from previous projects, and
copying and pasting boilerplate code from previous projects.
Unlike forking a repository, which allows you to use someone else’s code as a starting point, template repositories allow you to use your own code as a starting point, where each new project gets its own, independent Git history. Check it out!
Let’s take a look at how we can set up a convenient workflow. We’ll set up a boilerplate Eleventy project, turn it into a Git repository, host the repository on GitHub, and then configure that repository to be a template. Then, next time you have a static site project, you’ll be able to come back to the repository, click a button, and start working from an exact copy of your boilerplate.
Are you ready to try it out? Let’s set up our own static site using GitHub templates to see just how much templates can help streamline a static site project.
I’m using Eleventy as an example of a static site generator because it’s my personal go-to, but this process will work for Hugo, Jekyll, Nuxt, or any other flavor of static site generator you prefer.
We’re going to kick things off by running each of these in the command line:
cd ~
mkdir static-site-template
cd static-site-template
These three commands change into your home directory (~ on Unix-based systems), create a new directory called static-site-template, and then move into that new directory.
Next, we’ll initialize the Node project
In order to work with Eleventy, we need to install Node.js, which allows your computer to run JavaScript code outside of a web browser.
Node.js comes with node package manager, or npm, which downloads node packages to your computer. Eleventy is a node package, so we can use npm to fetch it.
Assuming Node.js is installed, let’s head back to the command line and run:
npm init
This creates a file called package.json in the directory. npm will prompt you for a series of questions to fill out the metadata in your package.json. After answering the questions, the Node.js project is initialized.
Now we can install Eleventy
Initializing the project gave us a package.json file which lets npm install packages, run scripts, and do other tasks for us inside that project. npm uses package.json as an entry point in the project to figure out precisely how and what it should do when we give it commands.
We can tell npm to install Eleventy as a development dependency by running:
npm install -D @11ty/eleventy
This will add a devDependency entry to the package.json file and install the Eleventy package to a node_modules folder in the project.
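For instance, after the install your package.json will contain an entry along these lines (the version range here is just illustrative; yours will match whatever release npm fetched):

```json
{
  "devDependencies": {
    "@11ty/eleventy": "^0.9.0"
  }
}
```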
The cool thing about the package.json file is that any other computer with Node.js and npm can read it and know to install Eleventy in the project node_modules directory without having to install it manually. See, we’re already streamlining things!
Configuring Eleventy
There are tons of ways to configure an Eleventy project. Flexibility is Eleventy’s strength. For the purposes of this tutorial, I’m going to demonstrate a configuration that provides:
A folder to cleanly separate website source code from overall project files
An HTML document for a single page website
CSS to style the document
JavaScript to add functionality to the document
Hop back in the command line. Inside the static-site-template folder, run these commands one by one (excluding the comments that appear after each # symbol):
mkdir src # creates a directory for your website source code
mkdir src/css # creates a directory for the website styles
mkdir src/js # creates a directory for the website JavaScript
touch src/index.html # creates the website HTML document
touch src/css/style.css # creates the website styles
touch src/js/main.js # creates the website JavaScript
This creates the basic file structure that will inform the Eleventy build. However, if we run Eleventy right now, it won’t generate the website we want. We still have to configure Eleventy to understand that it should only use files in the src folder for building, and that the css and js folders should be processed with passthrough file copy.
You can give this information to Eleventy through a file called .eleventy.js in the root of the static-site-template folder. You can create that file by running this command inside the static-site-template folder:
touch .eleventy.js
Edit the file in your favorite text editor so that it contains this:
Lines 2 and 3 tell Eleventy to use passthrough file copy for CSS and JavaScript. Line 6 tells Eleventy to use only the src directory to build its output.
Eleventy will now give us the expected output we want. Let’s put that to the test by running this in the command line:
npx @11ty/eleventy
The npx command allows npm to execute code from the project’s node_modules directory without touching the global environment. You’ll see output like this:
Writing _site/index.html from ./src/index.html.
Copied 2 items and Processed 1 file in 0.04 seconds (v0.9.0)
The static-site-template folder should now have a new directory in it called _site. If you dig into that folder, you’ll find the css and js directories, along with the index.html file.
This _site folder is the final output from Eleventy. It is the entirety of the website, and you can host it on any static web host.
Without any content, styles, or scripts, the generated site isn’t very interesting:
Let’s create a boilerplate website
Next up, we’re going to put together the baseline for a super simple website we can use as the starting point for all projects moving forward.
It’s worth mentioning that Eleventy has a ton of boilerplate files for different types of projects. It’s totally fine to go with one of those, though I often find I wind up needing to roll my own. So that’s what we’re doing here. Open src/index.html and give it a basic document with the title “Static site template”, a heading that reads “Great job making your website template!”, and links to the stylesheet and script files we created earlier.
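A bare-bones src/index.html using that title and heading might look like this (a sketch; everything beyond the title, heading, and asset links is an assumption):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Static site template</title>
    <link rel="stylesheet" href="css/style.css">
  </head>
  <body>
    <h1>Great job making your website template!</h1>
    <script src="js/main.js"></script>
  </body>
</html>
```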
We may as well style things a tiny bit, so let’s add this to src/css/style.css:
body {
font-family: sans-serif;
}
And we can confirm JavaScript is hooked up by adding this to src/js/main.js:
(function() {
console.log('Invoke the static site template JavaScript!');
})();
Want to see what we’ve got? Run npx @11ty/eleventy --serve in the command line. Eleventy will spin up a server with Browsersync and provide the local URL, which is probably something like localhost:8080.
Even the console tells us things are ready to go!
Let’s move this over to a GitHub repo
Git is the most commonly used version control system in software development. Most Unix-based computers come with it installed, and you can turn any directory into a Git repository by running this command:
git init
We should get a message like this:
Initialized empty Git repository in /path/to/static-site-template/.git/
That means a hidden .git folder was added inside the project directory, which allows the Git program to run commands against the project.
Before we start running a bunch of Git commands on the project, we need to tell Git about files we don’t want it to touch.
Inside the static-site-template directory, run:
touch .gitignore
Then open up that file in your favorite text editor. Add this content to the file:
_site/
node_modules/
This tells Git to ignore the node_modules directory and the _site directory. Committing every single Node.js module to the repo could make things really messy and tough to manage. All that information is already in package.json anyway.
Similarly, there’s no need to version control _site. Eleventy can generate it from the files in src, so no need to take up space in GitHub. It’s also possible that if we were to:
version control _site,
change files in src, and
forget to run Eleventy again,
then _site would reflect an older build of the website, and future developers (or a future version of yourself) may accidentally use an outdated version of the site.
Git is version control software, and GitHub is a Git repository host. There are other Git host providers like BitBucket or GitLab, but since we’re talking about a GitHub-specific feature (template repositories), we’ll push our work up to GitHub. If you don’t already have an account, go ahead and join GitHub. Once you have an account, create a GitHub repository and name it static-site-template.
GitHub will ask a few questions when setting up a new repository. One of those is whether we want to create a new repository on the command line or push an existing repository from the command line. Neither of these choices is exactly what we need. They assume we either don’t have anything at all, or have been using Git locally already. The static-site-template project already exists and has a Git repository initialized, but doesn’t yet have any commits.
So let’s ignore the prompts and instead run the following commands in the command line. Make sure to have the URL GitHub provides in the command from line 3 handy:
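The commands look like this (the remote URL on line 3 is a placeholder; substitute the one GitHub gave you):

```shell
git add .
git commit -m "first commit"
git remote add origin https://github.com/your-username/static-site-template.git
git push -u origin master
```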
These commands add the entire static-site-template folder to the Git staging area, commit it with the message “first commit,” add a remote repository (the GitHub repository), and then push the master branch up to that repository.
Let’s template-ize this thing
OK, this is the crux of what we have been working toward. GitHub template repositories allow us to use the repository we’ve just created as the foundation for other projects in the future, without having to redo all the work it took to get here!
Click Settings on the GitHub landing page of the repository to get started. On the settings page, check the box labeled Template repository.
Now when we go back to the repository page, we’ll get a big green button that says Use this template. Click it and GitHub will create a new repository that’s a mirror of our new template. The new repository will start with the same files and folders as static-site-template. From there, download or clone that new repository to start a new project with all the base files and configuration we set up in the template project.
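From there, kicking off a project from the generated repository looks something like this (the repository name is hypothetical):

```shell
git clone https://github.com/your-username/my-new-site.git
cd my-new-site
npm install                  # reads package.json and installs Eleventy
npx @11ty/eleventy --serve   # build and preview the new site
```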
We can extend the template for future projects
Now that we have a template repository, we can use it for any new static site project that comes up. However, you may find that a new project has needs beyond what’s been set up in the template. For example, let’s say you need to tap into Eleventy’s templating engine or data processing power.
Go ahead and build on top of the template as you work on the new project. When you finish that project, identify pieces you want to reuse in future projects. Perhaps you figured out a cool hover effect on buttons. Or you built your own JavaScript carousel element. Or maybe you’re really proud of the document design and hierarchy of information.
If you think anything you did on a project might come up again on your next run, remove the project-specific details and add the new stuff to your template project. Push those changes up to GitHub, and the next time you use static-site-template to kick off a project, your reusable code will be available to you.
There are some limitations to this, of course
GitHub template repositories are a useful tool for avoiding repetitive setup on new web development projects. I find this especially useful for static site projects. These template repositories might not be as appropriate for more complex projects that require external services like databases with configuration that cannot be version-controlled in a single directory.
Template repositories allow you to ship reusable code you have written so you can solve a problem once and use that solution over and over again. But while your new solutions will carry over to future projects, they won’t be ported backwards to old projects.
This is a useful process for sites with very similar structure, styles, and functionality. Projects with wildly varied requirements may not benefit from this code-sharing, and you could end up bloating your project with unnecessary code.
Wrapping up
There you have it! You now have everything you need to start a static site project with Eleventy and to re-purpose that setup on future projects. GitHub templates are so handy for kicking off projects quickly where we’d otherwise re-build the same wheel over and over. Use them to your advantage and enjoy a jump start on your projects moving forward!
This doc focuses on the example graph
that performs hand tracking with TensorFlow Lite on GPU. It is related to the hand detection example, and we recommend that users
review the hand detection example first.
For overall context on hand detection and hand tracking, please read this Google AI Blog post.
In the visualization above, the red dots represent the localized hand landmarks,
and the green lines are simply connections between selected landmark pairs for
visualization of the hand skeleton. The red box represents a hand rectangle that
covers the entire hand, derived either from hand detection (see hand detection example) or from the previous
round of hand landmark localization using an ML model (see also model card). Hand landmark localization is
performed only within the hand rectangle for computational efficiency and
accuracy, and hand detection is only invoked when landmark localization could
not identify hand presence in the previous iteration.
The example can also run in a mode that localizes hand landmarks in 3D (i.e.,
estimating an extra z coordinate):
In the visualization above, the localized hand landmarks are represented by dots
in different shades, with the brighter ones denoting landmarks closer to the
camera.
The subgraphs show up in the main graph visualization as nodes colored in
purple, and the subgraph itself can also be visualized just like a regular
graph. For more information on how to visualize a graph that includes subgraphs,
see the Visualizing Subgraphs section in the visualizer documentation.
# MediaPipe graph that performs hand tracking with TensorFlow Lite on GPU.
# Used in the examples in
# mediapipe/examples/android/src/java/com/mediapipe/apps/handtrackinggpu and
# mediapipe/examples/ios/handtrackinggpu.

# Images coming into and out of the graph.
input_stream: "input_video"
output_stream: "output_video"

# Throttles the images flowing downstream for flow control. It passes through
# the very first incoming image unaltered, and waits for downstream nodes
# (calculators and subgraphs) in the graph to finish their tasks before it
# passes through another image. All images that come in while waiting are
# dropped, limiting the number of in-flight images in most parts of the graph
# to 1. This prevents the downstream nodes from queuing up incoming images and
# data excessively, which leads to increased latency and memory usage, unwanted
# in real-time mobile applications. It also eliminates unnecessary computation,
# e.g., the output produced by a node may get dropped downstream if the
# subsequent nodes are still busy processing previous inputs.
node {
calculator: "FlowLimiterCalculator"
input_stream: "input_video"
input_stream: "FINISHED:hand_rect"
input_stream_info: {
tag_index: "FINISHED"
back_edge: true
}
output_stream: "throttled_input_video"
}
# Caches a hand-presence decision fed back from HandLandmarkSubgraph, and upon
# the arrival of the next input image sends out the cached decision with the
# timestamp replaced by that of the input image, essentially generating a packet
# that carries the previous hand-presence decision. Note that upon the arrival
# of the very first input image, an empty packet is sent out to jump start the
# feedback loop.
node {
calculator: "PreviousLoopbackCalculator"
input_stream: "MAIN:throttled_input_video"
input_stream: "LOOP:hand_presence"
input_stream_info: {
tag_index: "LOOP"
back_edge: true
}
output_stream: "PREV_LOOP:prev_hand_presence"
}
# Drops the incoming image if HandLandmarkSubgraph was able to identify hand
# presence in the previous image. Otherwise, passes the incoming image through
# to trigger a new round of hand detection in HandDetectionSubgraph.
node {
calculator: "GateCalculator"
input_stream: "throttled_input_video"
input_stream: "DISALLOW:prev_hand_presence"
output_stream: "hand_detection_input_video"
node_options: {
[type.googleapis.com/mediapipe.GateCalculatorOptions] {
empty_packets_as_allow: true
}
}
}
# Subgraph that detects hands (see hand_detection_gpu.pbtxt).
node {
calculator: "HandDetectionSubgraph"
input_stream: "hand_detection_input_video"
output_stream: "DETECTIONS:palm_detections"
output_stream: "NORM_RECT:hand_rect_from_palm_detections"
}
# Subgraph that localizes hand landmarks (see hand_landmark_gpu.pbtxt).
node {
calculator: "HandLandmarkSubgraph"
input_stream: "IMAGE:throttled_input_video"
input_stream: "NORM_RECT:hand_rect"
output_stream: "LANDMARKS:hand_landmarks"
output_stream: "NORM_RECT:hand_rect_from_landmarks"
output_stream: "PRESENCE:hand_presence"
}
# Caches a hand rectangle fed back from HandLandmarkSubgraph, and upon the
# arrival of the next input image sends out the cached rectangle with the
# timestamp replaced by that of the input image, essentially generating a packet
# that carries the previous hand rectangle. Note that upon the arrival of the
# very first input image, an empty packet is sent out to jump start the
# feedback loop.
node {
calculator: "PreviousLoopbackCalculator"
input_stream: "MAIN:throttled_input_video"
input_stream: "LOOP:hand_rect_from_landmarks"
input_stream_info: {
tag_index: "LOOP"
back_edge: true
}
output_stream: "PREV_LOOP:prev_hand_rect_from_landmarks"
}
# Merges a stream of hand rectangles generated by HandDetectionSubgraph and that
# generated by HandLandmarkSubgraph into a single output stream by selecting
# between one of the two streams. The former is selected if the incoming packet
# is not empty, i.e., hand detection is performed on the current image by
# HandDetectionSubgraph (because HandLandmarkSubgraph could not identify hand
# presence in the previous image). Otherwise, the latter is selected, which is
# never empty because HandLandmarkSubgraph processes all images (that went
# through FlowLimiterCalculator).
node {
calculator: "MergeCalculator"
input_stream: "hand_rect_from_palm_detections"
input_stream: "prev_hand_rect_from_landmarks"
output_stream: "hand_rect"
}
# Subgraph that renders annotations and overlays them on top of the input
# images (see renderer_gpu.pbtxt).
node {
calculator: "RendererSubgraph"
input_stream: "IMAGE:throttled_input_video"
input_stream: "LANDMARKS:hand_landmarks"
input_stream: "NORM_RECT:hand_rect"
input_stream: "DETECTIONS:palm_detections"
output_stream: "IMAGE:output_video"
}
# MediaPipe hand detection subgraph.
type: "HandDetectionSubgraph"
input_stream: "input_video"
output_stream: "DETECTIONS:palm_detections"
output_stream: "NORM_RECT:hand_rect_from_palm_detections"

# Transforms the input image on GPU to a 256x256 image. To scale the input
# image, the scale_mode option is set to FIT to preserve the aspect ratio,
# resulting in potential letterboxing in the transformed image.
node: {
calculator: "ImageTransformationCalculator"
input_stream: "IMAGE_GPU:input_video"
output_stream: "IMAGE_GPU:transformed_input_video"
output_stream: "LETTERBOX_PADDING:letterbox_padding"
node_options: {
[type.googleapis.com/mediapipe.ImageTransformationCalculatorOptions] {
output_width: 256
output_height: 256
scale_mode: FIT
}
}
}
# Generates a single side packet containing a TensorFlow Lite op resolver that
# supports custom ops needed by the model used in this graph.
node {
calculator: "TfLiteCustomOpResolverCalculator"
output_side_packet: "opresolver"
node_options: {
[type.googleapis.com/mediapipe.TfLiteCustomOpResolverCalculatorOptions] {
use_gpu: true
}
}
}
# Converts the transformed input image on GPU into an image tensor stored as a
# TfLiteTensor.
node {
calculator: "TfLiteConverterCalculator"
input_stream: "IMAGE_GPU:transformed_input_video"
output_stream: "TENSORS_GPU:image_tensor"
}
# Runs a TensorFlow Lite model on GPU that takes an image tensor and outputs a
# vector of tensors representing, for instance, detection boxes/keypoints and
# scores.
node {
calculator: "TfLiteInferenceCalculator"
input_stream: "TENSORS_GPU:image_tensor"
output_stream: "TENSORS:detection_tensors"
input_side_packet: "CUSTOM_OP_RESOLVER:opresolver"
node_options: {
[type.googleapis.com/mediapipe.TfLiteInferenceCalculatorOptions] {
model_path: "palm_detection.tflite"
use_gpu: true
}
}
}
# Generates a single side packet containing a vector of SSD anchors based on
# the specification in the options.
node {
calculator: "SsdAnchorsCalculator"
output_side_packet: "anchors"
node_options: {
[type.googleapis.com/mediapipe.SsdAnchorsCalculatorOptions] {
num_layers: 5
min_scale: 0.1171875
max_scale: 0.75
input_size_height: 256
input_size_width: 256
anchor_offset_x: 0.5
anchor_offset_y: 0.5
strides: 8
strides: 16
strides: 32
strides: 32
strides: 32
aspect_ratios: 1.0
fixed_anchor_size: true
}
}
}
# Decodes the detection tensors generated by the TensorFlow Lite model, based on
# the SSD anchors and the specification in the options, into a vector of
# detections. Each detection describes a detected object.
node {
calculator: "TfLiteTensorsToDetectionsCalculator"
input_stream: "TENSORS:detection_tensors"
input_side_packet: "ANCHORS:anchors"
output_stream: "DETECTIONS:detections"
node_options: {
[type.googleapis.com/mediapipe.TfLiteTensorsToDetectionsCalculatorOptions] {
num_classes: 1
num_boxes: 2944
num_coords: 18
box_coord_offset: 0
keypoint_coord_offset: 4
num_keypoints: 7
num_values_per_keypoint: 2
sigmoid_score: true
score_clipping_thresh: 100.0
reverse_output_order: true
x_scale: 256.0
y_scale: 256.0
h_scale: 256.0
w_scale: 256.0
min_score_thresh: 0.7
}
}
}
# Performs non-max suppression to remove excessive detections.
node {
calculator: "NonMaxSuppressionCalculator"
input_stream: "detections"
output_stream: "filtered_detections"
node_options: {
[type.googleapis.com/mediapipe.NonMaxSuppressionCalculatorOptions] {
min_suppression_threshold: 0.3
overlap_type: INTERSECTION_OVER_UNION
algorithm: WEIGHTED
return_empty_detections: true
}
}
}
# Maps detection label IDs to the corresponding label text ("Palm"). The label
# map is provided in the label_map_path option.
node {
calculator: "DetectionLabelIdToTextCalculator"
input_stream: "filtered_detections"
output_stream: "labeled_detections"
node_options: {
[type.googleapis.com/mediapipe.DetectionLabelIdToTextCalculatorOptions] {
label_map_path: "palm_detection_labelmap.txt"
}
}
}
# Adjusts detection locations (already normalized to [0.f, 1.f]) on the
# letterboxed image (after image transformation with the FIT scale mode) to the
# corresponding locations on the same image with the letterbox removed (the
# input image to the graph before image transformation).
node {
calculator: "DetectionLetterboxRemovalCalculator"
input_stream: "DETECTIONS:labeled_detections"
input_stream: "LETTERBOX_PADDING:letterbox_padding"
output_stream: "DETECTIONS:palm_detections"
}
# Extracts image size from the input images.
node {
calculator: "ImagePropertiesCalculator"
input_stream: "IMAGE_GPU:input_video"
output_stream: "SIZE:image_size"
}
# Converts results of palm detection into a rectangle (normalized by image size)
# that encloses the palm and is rotated such that the line connecting the center
# of the wrist and the MCP of the middle finger is aligned with the Y-axis of
# the rectangle.
node {
calculator: "DetectionsToRectsCalculator"
input_stream: "DETECTIONS:palm_detections"
input_stream: "IMAGE_SIZE:image_size"
output_stream: "NORM_RECT:palm_rect"
node_options: {
[type.googleapis.com/mediapipe.DetectionsToRectsCalculatorOptions] {
rotation_vector_start_keypoint_index: 0 # Center of wrist.
rotation_vector_end_keypoint_index: 2 # MCP of middle finger.
rotation_vector_target_angle_degrees: 90
output_zero_rect_for_empty_detections: true
}
}
}
# Expands and shifts the rectangle that contains the palm so that it's likely
# to cover the entire hand.
node {
calculator: "RectTransformationCalculator"
input_stream: "NORM_RECT:palm_rect"
input_stream: "IMAGE_SIZE:image_size"
output_stream: "hand_rect_from_palm_detections"
node_options: {
[type.googleapis.com/mediapipe.RectTransformationCalculatorOptions] {
scale_x: 2.6
scale_y: 2.6
shift_y: -0.5
square_long: true
}
}
}
# MediaPipe hand landmark localization subgraph.
type: "HandLandmarkSubgraph"
input_stream: "IMAGE:input_video"
input_stream: "NORM_RECT:hand_rect"
output_stream: "LANDMARKS:hand_landmarks"
output_stream: "NORM_RECT:hand_rect_for_next_frame"
output_stream: "PRESENCE:hand_presence"

# Crops the rectangle that contains a hand from the input image.
node {
calculator: "ImageCroppingCalculator"
input_stream: "IMAGE_GPU:input_video"
input_stream: "NORM_RECT:hand_rect"
output_stream: "IMAGE_GPU:hand_image"
}
# Transforms the input image on GPU to a 256x256 image. To scale the input
# image, the scale_mode option is set to FIT to preserve the aspect ratio,
# resulting in potential letterboxing in the transformed image.
node: {
calculator: "ImageTransformationCalculator"
input_stream: "IMAGE_GPU:hand_image"
output_stream: "IMAGE_GPU:transformed_hand_image"
output_stream: "LETTERBOX_PADDING:letterbox_padding"
node_options: {
[type.googleapis.com/mediapipe.ImageTransformationCalculatorOptions] {
output_width: 256
output_height: 256
scale_mode: FIT
}
}
}
# Converts the transformed input image on GPU into an image tensor stored as a
# TfLiteTensor.
node {
calculator: "TfLiteConverterCalculator"
input_stream: "IMAGE_GPU:transformed_hand_image"
output_stream: "TENSORS_GPU:image_tensor"
}
# Runs a TensorFlow Lite model on GPU that takes an image tensor and outputs a
# vector of tensors representing, for instance, detection boxes/keypoints and
# scores.
node {
calculator: "TfLiteInferenceCalculator"
input_stream: "TENSORS_GPU:image_tensor"
output_stream: "TENSORS:output_tensors"
node_options: {
[type.googleapis.com/mediapipe.TfLiteInferenceCalculatorOptions] {
model_path: "hand_landmark.tflite"
use_gpu: true
}
}
}
# Splits a vector of tensors into multiple vectors.
node {
calculator: "SplitTfLiteTensorVectorCalculator"
input_stream: "output_tensors"
output_stream: "landmark_tensors"
output_stream: "hand_flag_tensor"
node_options: {
[type.googleapis.com/mediapipe.SplitVectorCalculatorOptions] {
ranges: { begin: 0 end: 1 }
ranges: { begin: 1 end: 2 }
}
}
}
# Converts the hand-flag tensor into a float that represents the confidence
# score of hand presence.
node {
calculator: "TfLiteTensorsToFloatsCalculator"
input_stream: "TENSORS:hand_flag_tensor"
output_stream: "FLOAT:hand_presence_score"
}
# Applies a threshold to the confidence score to determine whether a hand is
# present.
node {
calculator: "ThresholdingCalculator"
input_stream: "FLOAT:hand_presence_score"
output_stream: "FLAG:hand_presence"
node_options: {
[type.googleapis.com/mediapipe.ThresholdingCalculatorOptions] {
threshold: 0.1
}
}
}
# Decodes the landmark tensors into a vector of landmarks, where the landmark
# coordinates are normalized by the size of the input image to the model.
node {
calculator: "TfLiteTensorsToLandmarksCalculator"
input_stream: "TENSORS:landmark_tensors"
output_stream: "NORM_LANDMARKS:landmarks"
node_options: {
[type.googleapis.com/mediapipe.TfLiteTensorsToLandmarksCalculatorOptions] {
num_landmarks: 21
input_image_width: 256
input_image_height: 256
}
}
}
# Adjusts landmarks (already normalized to [0.f, 1.f]) on the letterboxed hand
# image (after image transformation with the FIT scale mode) to the
# corresponding locations on the same image with the letterbox removed (hand
# image before image transformation).
node {
calculator: "LandmarkLetterboxRemovalCalculator"
input_stream: "LANDMARKS:landmarks"
input_stream: "LETTERBOX_PADDING:letterbox_padding"
output_stream: "LANDMARKS:scaled_landmarks"
}
# Projects the landmarks from the cropped hand image to the corresponding
# locations on the full image before cropping (input to the graph).
node {
calculator: "LandmarkProjectionCalculator"
input_stream: "NORM_LANDMARKS:scaled_landmarks"
input_stream: "NORM_RECT:hand_rect"
output_stream: "NORM_LANDMARKS:hand_landmarks"
}
# Extracts image size from the input images.
node {
calculator: "ImagePropertiesCalculator"
input_stream: "IMAGE_GPU:input_video"
output_stream: "SIZE:image_size"
}
# Converts hand landmarks to a detection that tightly encloses all landmarks.
node {
calculator: "LandmarksToDetectionCalculator"
input_stream: "NORM_LANDMARKS:hand_landmarks"
output_stream: "DETECTION:hand_detection"
}
# Converts the hand detection into a rectangle (normalized by image size) that
# encloses the hand and is rotated such that the line connecting the center of
# the wrist and the MCP of the middle finger is aligned with the Y-axis of the
# rectangle.
node {
calculator: "DetectionsToRectsCalculator"
input_stream: "DETECTION:hand_detection"
input_stream: "IMAGE_SIZE:image_size"
output_stream: "NORM_RECT:hand_rect_from_landmarks"
node_options: {
[type.googleapis.com/mediapipe.DetectionsToRectsCalculatorOptions] {
rotation_vector_start_keypoint_index: 0 # Center of wrist.
rotation_vector_end_keypoint_index: 9 # MCP of middle finger.
rotation_vector_target_angle_degrees: 90
}
}
}
# Expands the hand rectangle so that in the next video frame it's likely to
# still contain the hand even with some motion.
node {
calculator: "RectTransformationCalculator"
input_stream: "NORM_RECT:hand_rect_from_landmarks"
input_stream: "IMAGE_SIZE:image_size"
output_stream: "hand_rect_for_next_frame"
node_options: {
[type.googleapis.com/mediapipe.RectTransformationCalculatorOptions] {
scale_x: 1.6
scale_y: 1.6
square_long: true
}
}
}
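To make the letterbox bookkeeping in the graphs above concrete, here is a minimal Python sketch (my own illustration, not MediaPipe's actual implementation) of the math behind the FIT scale mode and the DetectionLetterboxRemovalCalculator / LandmarkLetterboxRemovalCalculator pair: first compute the normalized padding that aspect-preserving FIT scaling introduces, then map normalized coordinates on the letterboxed image back to the un-letterboxed image.

```python
def fit_padding(src_w, src_h, dst_w, dst_h):
    """Normalized (left, top, right, bottom) letterbox padding produced by
    fitting a src_w x src_h image into dst_w x dst_h while preserving the
    aspect ratio (the FIT scale mode)."""
    scale = min(dst_w / src_w, dst_h / src_h)
    pad_x = (dst_w - src_w * scale) / dst_w / 2.0
    pad_y = (dst_h - src_h * scale) / dst_h / 2.0
    return (pad_x, pad_y, pad_x, pad_y)

def remove_letterbox(points, padding):
    """Map normalized (x, y) points on the letterboxed image back to the
    corresponding locations on the image with the letterbox removed."""
    left, top, right, bottom = padding
    w = 1.0 - left - right   # fraction of width occupied by the actual image
    h = 1.0 - top - bottom   # fraction of height occupied by the actual image
    return [((x - left) / w, (y - top) / h) for x, y in points]

# Example: a 640x480 frame fitted into the 256x256 model input is padded
# only at the top and bottom, by 12.5% of the height on each side.
pad = fit_padding(640, 480, 256, 256)    # (0.0, 0.125, 0.0, 0.125)
restored = remove_letterbox([(0.5, 0.125), (0.5, 0.875)], pad)
```

This is why landmark and detection coordinates coming out of the model cannot be used directly: for the 640x480 example, y-coordinates must be shifted by 0.125 and stretched by 1/0.75 before they line up with the original frame.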
I have updated this article with responses from GitHub's CEO, so please read it to the end.
And now, please read my second article, which takes a much deeper look at the issues and questions GitHub should answer in an official statement.
That's my profile on GitHub. Did you notice the yellow warning at the top?
First, some background: I am a software developer based in Iran, and I have been on GitHub since 2012. In January 2019, when they announced that GitHub Free includes unlimited private repositories, I moved to GitHub completely.
Everything was fine and I was happy. I participated in Hacktoberfest and they failed to send my T-shirt due to "international embargoes", but I thought: OK, at least I can use their free services, right? Wrong!
Don’t worry, I provided an alternate address
Now, the real story begins: today I received an email.
Well, they blocked me. But why? I know that I'm a resident of a sanctioned country, but I didn't have any financial transactions with the GitHub company. I just used their FREE services, like millions of other developers, so I read that GitHub and Trade Controls page, looking for an answer.
Seriously?
Seriously? Do they believe someone could use GitHub to develop or USE nuclear, biological or chemical weapons? How? Really, how? With FREE private repositories? I mean, let's think about it: you think that some government agency would use a private GitHub repository as storage for military secrets?
For a second, let's suppose that's true and someone used a FREE service to violate sanction laws or arms regulations. Do you block all Iranian developers for that? I mean, are you the MAD KING?
BLOCK THEM ALL
If you are curious about the effect of GitHub's action, take a look at this screenshot from one of my free private repositories.
I can't access the code, issues, PRs or anything!
There is a famous line from George Orwell’s Animal Farm:
All animals are equal, but some animals are more equal than others
I think we can apply this line to this GitHub story: all developers are equal, but some developers are more equal than others. Some can have the free services, while others would supposedly use them to develop nuclear weapons!
Come on GitHub. You are not a mad king so stop the madness: Don’t burn them all.
I asked Nat Friedman (CEO of GitHub) about this on Twitter; help me spread the word by retweeting my tweet.
After 1K retweets of my first tweet and more than 15K claps here, Nat Friedman and GitHub are still ignoring me. So I tweeted again 🙂 You can retweet this one as well and ask GitHub to make #githubForEveryone again.
Update 1 (Jul 25): Apparently it’s not just disabling free private repositories. GitHub Pages is blocked too, even for “public” open source repositories!
Screenshot of GitHub Pages for a public repository. I can access the repository itself, but I get a 404 error for the GitHub Pages site associated with that public repository.
More on this in Update 7 below.
Update 2 (Jul 26): GitHub blocked all Iranian accounts without any prior notice, and they didn't give us a chance to download a backup of our data. Here is a screenshot of GitHub support's response to a developer who requested a backup.
More on this in Update 6 below.
Update 3 (Jul 26): GitHub is targeting people based on nationality (their entire account activity), not residency or current connection IPs. Here is an example (this is another developer, not me).
I think this is a clear case of discrimination and totally against open source values.
Of course, we are not "evil people", and we deserve freedom on GitHub (the largest open source community).
Here is a response from GitHub's CEO (Jul 28).
Update 4 (Jul 27): This public open source project, created by a blocked Iranian developer, has received more than 2.5K stars since yesterday, and it's still not on the GitHub Trending page. Does this mean that if a blocked user makes an awesome open source project, GitHub is going to ignore it?
Why don't they show this repo on the GitHub Trending page?
More on this: it seems that this is not an issue (Jul 28).
Update 5 (Jul 28): Confirmed. GitHub is quietly rolling back some of the restrictions for blocked users. The one I spotted: the "Delete this repository" button is no longer disabled.
Screenshot from before and after. As you can see, the "Make private" button is still disabled for blocked users.
Update 6 (Jul 28): Another update for blocked users: you can make your private repos public so you can clone them.
They made these changes on July 28
Although people don't like exporting their code this way (this is another developer, not me).
Update 7 (Jul 28): GitHub Pages seems to work now for blocked users. Just change the source in the options, and you should see the Custom Domain textbox.
They made these changes on July 28
Update 8: They added a "close" button to that annoying warning at the top, so a blocked user can dismiss the message without needing workarounds like this.