
The elephant in the diversity room


Although there’s a lot of heated discussion around diversity, I feel many of us ignore the elephant in the web development diversity room. We tend to forget about users of older or non-standard devices and browsers, instead focusing on people with modern browsers, which nowadays means the latest versions of Chrome and Safari.

This is nothing new — see “works only in IE” ten years ago, or “works only in Chrome” right now — but as long as we’re addressing other diversity issues in web development we should address this one as well.

Ignoring users of older browsers springs from the same causes as ignoring women, or non-whites, or any other disadvantaged group. The average web developer does not know any non-whites, so he ignores them. The average web developer doesn’t know any people with older devices, so he ignores them. Not ignoring them would be more work, and we’re on a tight deadline with a tight budget, the boss didn’t say we have to pay attention to them, etc. etc. The usual excuses.

Besides, let’s be realistic, shall we? The next billion, most of whom are on cheap Android devices without the latest and greatest browsing software, are mostly poor — and mostly black or brown anyway — so they don’t fit in our business model. We only cater to whites — not because we’re racist (of COURSE we aren’t!) but because of ... well, the Others are not our market.

So far, this diversity problem plays out the same as the others. However, there’s one important difference: while other diversity problems in web development could conceivably be solved by non-web developers (by managers, for instance), the old-devices problem can only be solved by us because we’re the only ones who know how to do it.

Besides, taking care of all users is our job. So let’s do our job, shall we?

And let’s start at the start. Let’s admit we have a prejudice against users of old or non-standard devices and browsers, just as we have a prejudice against women and non-whites, and for exactly the same reasons.

1 public comment

acdha (Washington, DC):
Interesting way to put this…

chrishiestand:
Yes. I'm not sure that I buy the entire argument but it does make a really good point about users on old devices.

Getting started with variable fonts


The following is an unedited extract from my [forthcoming book](http://book.webtypography.net/).

In October 2016, version 1.8 of OpenType was [released](https://medium.com/@tiro/12ba6cd2369), and with it an extensive new technology: OpenType Font Variations. More commonly known as variable fonts, the technology enables a single font file to behave like multiple fonts. This is done by defining variations within the font, which are interpolated along one or more axes. Two of these axes might be width and weight, but the type designer can define many others too.


[Figure: Gingham variable font with continuous variation along width and weight axes, shown as a 6 × 6 matrix of styles]

The preceding image shows a variable font rendered in 36 different styles, all from one file. If you were to pick four styles and serve them as normal fonts, a variable font file capable of providing the same styles would be significantly smaller than the four separate files, with the added speed advantage of requiring just one call to the server.

The illustration varies width and weight. Those two axes alone mean that, according to the OpenType Font Variations specification, theoretically 1000×1000 (one million) variations are possible within the one file with no extra data. A third axis could increase the possibilities to one billion.

At the time of writing the technology is in its infancy, but it potentially opens up tremendous opportunities for new kinds of responsive typography. The file size savings and fine precision mean that many small adjustments could be made to the rendered font, potentially responding dynamically to the reader’s device and environment, as well as to the text.

Within the design space created by the axes of variation in a font, the type designer can define specific positions as named instances. Each named instance could appear to users of design software as if it were a separate font, for example ‘regular’, ‘light condensed’ or ‘extra bold extended’.

In the OpenType specification, five common axes of variation have been pre-defined as four-character tags: weight `wght`, width `wdth`, italic `ital`, slant `slnt` and optical size `opsz`. These font variations can be enabled by the `font-weight`, `font-stretch`, and `font-style` properties. [CSS4](https://drafts.csswg.org/css-fonts-4/) adds new values for the properties to work with font variations:

  • `font-weight` takes any integer from 1–999 (not limited to multiples of 100 as in CSS3).
  • `font-stretch` takes a percentage number in a range where 100% is normal, 50% is ultra-condensed and 200% is ultra-expanded.
  • `font-style` takes an oblique angle value from `oblique -90deg` to `oblique 90deg`.
  • `font-optical-sizing` is a new property taking a value of `auto` or `none` which turns on optical sizing if it’s available as an axis in the variable font.
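As a minimal sketch of how these higher-level properties might be combined (the font name here is a placeholder, and the values assume the font actually provides the corresponding axes):

```css
h1 {
  font-family: 'Nicefont', sans-serif; /* hypothetical variable font */
  font-weight: 650;           /* any integer from 1-999, not just multiples of 100 */
  font-stretch: 130%;         /* between normal (100%) and ultra-expanded (200%) */
  font-style: oblique 8deg;   /* slant axis, if the font defines one */
  font-optical-sizing: auto;  /* enable the opsz axis, if the font defines one */
}
```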

[Figure: Continuous variation along an optical sizing axis in Amstelvar, shown in six styles]

Font designers can also define custom axes with their own four-character tags. This enables designers to vary almost any imaginable aspect of a typeface, such as contrast, x-height, serif shape, grunginess, and even parts of individual glyphs, such as the length of the tail on a Q. Using a syntax similar to `font-feature-settings`, custom axes, as well as the predefined ones, are available through the low-level `font-variation-settings` property. For example, this would render text with a variation that is very wide, light in weight and optically sized for 48pt:

h2 {
  font-variation-settings: "wdth" 600, "wght" 200, "opsz" 48;
}

Visit Laurence Penney’s [Axis-Praxis.org](http://Axis-Praxis.org) to play with variations and design instances of some variable fonts (requires [Safari Technology Preview](https://developer.apple.com/safari/technology-preview/)).

As with regular OpenType fonts, variable fonts can be used as web fonts as-is, or preferably wrapped up as a WOFF. If you want to use a variable font as a web font, in your `@font-face` rule you should set the `format` to a variations-specific string such as `woff-variations`, `woff2-variations` or `truetype-variations`, matching the container you serve. If you wish to provide regular font fallbacks for browsers which don’t support variable fonts, you can use multiple `@font-face` rules where necessary, repeating the variable font each time.

@font-face {
  font-family: 'Nicefont';
  /* static fallback first; a later src declaration with a supported
     format overrides an earlier one, so variable-capable browsers
     take the variable file */
  src: url('nicefont_regular.woff2') format('woff2');
  src: url('nicefont_var.woff2') format('woff2-variations');
  font-weight: normal;
  font-style: normal;
}
@font-face {
  font-family: 'Nicefont';
  src: url('nicefont_black.woff2') format('woff2');
  src: url('nicefont_var.woff2') format('woff2-variations');
  font-weight: 800;
  font-style: normal;
}
At the time of writing there is support for `font-variation-settings` in WebKit nightlies and [Safari Technology Preview](https://developer.apple.com/safari/technology-preview/), but neither supports `font-weight` or the other high-level properties with variable fonts. Additionally the web font `format` needs to be `woff` or `ttf`. Variable fonts were jointly developed by Adobe, Apple, Google and Microsoft, which means support in new versions of browsers should arrive across the board as soon as the precise implementations and CSS specifications are agreed. Current [estimates](http://responsivewebdesign.com/podcast/variable-fonts) have variable fonts being a viable option on the web by early 2018.


delusional is not a skill set



It’s great to be delusional. It allows you to try super ambitious stuff that no normal person would touch with a barge pole.

But that on its own is not enough. You also need the other stuff, the skill set, the ability to actually execute.

Talent. Brains. Ambition. Charm. Business savvy.

All the rock star stuff we admire the most in other people counts for little without a solid, long-term work ethic.

You know, the boring stuff…

The post delusional is not a skill set appeared first on Gapingvoid.


How about we make ES6 the new baseline?



During the recording of the web platform’s “Are Web Components ready?” podcast one of the comments stuck with me:

With web components we’re trying to bring ES6-era technology into an ES5 world. That makes no sense.

There is a lot of interesting logic in that one. Right now, we’re in a bad place with the web. There is a big discussion about what we should consider the “modern web” and how to innovate it. And the two sides of it are at loggerheads:

  • Purists of the web frown upon JavaScript dependency and expect no new feature to break the web. Instead it should build upon what we already have. These are developers wearing the battle scars of the browser wars. I have counted myself amongst them for a long time now – I just like things that work and got disappointed by browsers once too often.
  • The more “pragmatic engineering” crowd sees the web as a software platform that needs evolving like every other one does. And one that is falling woefully behind. Native platforms on mobile for example do not worry about breaking existing experiences. It is OK to request the user to have a certain version of an OS to run. The same – in their view – should be OK on the web.

Both are correct, and both are wrong. And I am sick of it. Instead of trying to fix the situation, we bicker over ideas and methodologies. We christen new development approaches with grandiose names. Then we argue for days on end what these mean. We talk about “real world use” without looking at numbers not skewed in favor of certain solutions. And while all that happens, we’re losing the web.

I’m not buying the tales of woe that all users prefer native to the web now. That’s a short-sighted view backed up by sales numbers in affluent countries and our own circles. There is a massive audience out there with no smartphones.

I also don’t buy the argument that native is a fad and the web will prevail in the long run. We need to get better with the web and its technologies. And we won’t do that by pretending what we did twenty years ago is still great.

There is an argument for leaving old browsers without new functionality. The funny thing is that this is also what I, as a person who doesn’t want to break the web, believe in.

Should we stop pushing the web forward?

A few weeks ago Peter-Paul Koch kicked the hornets’ nest when he proposed a one-year innovation hiatus for browsers in his “Stop pushing the web forward” post.

He pointed out that there is a problem.

  • We have many standards and proposed solutions to the shortcomings of the web. But all are still far away from implementation across browsers.
  • He also pointed out a problem with adoption speed. None of the proposed standards managed to get any traction within a year.

Web Components is the biggest culprit there. This was also one of the findings of the Web Components/Modules panel at EdgeConf this year. It seems that without libraries, Web Components are more or less unusable right now. This is partly because a lot of consensus is yet to be found as to what they should achieve.

It is hard to write a standard. It is hard to get consensus and buy-in from various browser vendors and their partners. And it is hard to make sure we don’t put a standard in our browsers that turns out to be less than optimal once we have it in there. We had enough of those already.

This is where JavaScript comes in. It has always been the means of adding upcoming functionality to the browsers of now and the ones of the past.

JavaScript is powerful

The great thing about JavaScript is that it spans all the layers of web development. You can create both HTML and CSS with it (using the DOM and CSSOM, or writing out styles inline). You can do so after you have tested for the capabilities of the browser and – to a degree – the end user. You can even create images, sounds, 3D environments – well, you name it.

JavaScript also successfully moved away from the client side and powers servers, fat-client applications and APIs. In these environments you control the JavaScript engine. Originally this was only V8, but now Chakra is also available as an alternative. This sort of control is great for developers who know what they are doing. It also gives us the wrong impression that we could have the same on the web.

The bad thing about JavaScript is that this gives a lot of power to people too busy to use it in a thorough fashion.

  • User agent sniffing is rampant and woefully wrong.
  • A lot of solutions test for touch support and then assume a smartphone, leaving touch-screen devices with the wrong interface.
  • Many users of libraries trust them to fix all issues without verifying.
  • A lot of user agent sniffing checks for the name of a browser instead of the buggy version, which makes fixing those bugs a futile exercise – as far as this product is concerned, the browser will always stay patched.

There is no doubt that the use case for JavaScript has changed in the last few years and that – for good or worse – our solutions rely on it. This is OK, as long as we ensure that only those browsers that can run this functionality get it. We cannot let developer convenience result in empty pages.

Empty pages are empty pages

XHTML was killed off because it was too unforgiving. Any encoding problem in our files would have meant our users got an XML error message instead of our products. That’s why we replaced it with HTML5, which uses a much more forgiving parser.

The same problem applies to JavaScript. It is not fault tolerant. Any error that can happen is fatal to the execution of the program. To make matters worse, even the delivery of JavaScript to our end users is riddled with obstacles, bear traps and moats to cross. If you can’t rely on the delivery of your helper library you can easily end up with an empty page instead of the 60fps goodness we work so hard to deliver.

It is time to fix JavaScript

And we need to change JavaScript. It is possible to do virtually everything in JavaScript, and you learn about new things and quirks of the language almost every week. While this is fun, it also holds us back. Our success as developers used to be defined by knowing how browsers mess up. Now that we have more or less fixed that, our job should not be to know the quirks and oddities of a language – it should be to build maintainable, scalable and performant software products.

Our current attempts to improve JavaScript as a language have a few issues. Older browsers that get an ECMAScript 6 script taking advantage of all the good syntax changes see it as a syntax error and break.

That brings us to an impasse: we want to innovate and extend the language, but we have to deal with the issue of legacy environments. Our current solution is the one we always took in case of confusion: we abstract.

Abstraction languages and transpiling

On first glance, this seems like a great idea: we let computers do what they do best, crunching numbers and converting one thing into another. Using a language like TypeScript or a transpiler like Traceur or Babel we gain a lot of benefits:

  • End users don’t get broken experiences. The transpiler converts ES6 to understandable ES5. They may get a lot more code than they would in an ES6 capable environment, but that’d mean they’d need to change their environment – something people only do for very good reasons. Our products are not reason enough – sorry.
  • Developers can use the terser, cleaner syntax of ES6 right now without worrying about breakage in older browsers.
  • We can embrace the more structured approach of classes in JavaScript instead of having to get into the “JavaScript mindset”. This means more developers can build for the web.
  • We control the conversion step – turning ES6 into code that runs across browsers happens in the transpiler, a resource we control. That way we can convert only what is necessary into legacy code and use natively supported features where they exist.

Alas, the last part doesn’t happen right now. Transpiling is currently too slow to run in the browser, which is why we do it on the server side. There we cannot do any capability testing, which means we convert all the ES6 features to the lowest common denominator. That way, the native support for ES6 in browsers never gets any traction.

In essence, we added ES6 to browsers for internal use only. Developers write ES6, but always run ES5 in the browser.

This also means that we developers no longer write the code that runs in the browser, but code one level up from that. That makes debugging harder, and we need to use sourcemaps to connect errors with the lines in our source code that caused them. We might also run into the issue where the code generated by the transpiler is faulty and we can’t do anything about it.

The beauty of the web was its immediate connection between creation and consumption: you wrote the code that ran in the browser. Developer tools have become amazingly sophisticated in recent years, giving us insights into how our code behaves. With abstractions, we forfeit these beautiful options.

We already missed the boat once when DOM became a thing

Let’s turn back the clock a bit. Those of us who were around before the DOM was standardised, when DHTML was a thing, remember clearly how terrible that was.

We rejoiced when we got DOM support and had one API to write against. We even coined the term “DOM scripting” to make a clear distinction between the DHTML of old and the “modern” code we were writing. All of this was based on the simple principle of progressive enhancement using capability testing.

All you did was wrap your code in a statement that checked whether the “modern” DOM was supported:

if (document.getElementById) {
  // … your code
}

And then you used all the crazy new stuff that made our life so much easier: createElement, appendChild, getElementsByTagName. These were great (until we found innerHTML).

Browsers that didn’t make the cut didn’t get any JavaScript. This has a few benefits:

  • You move forward without breaking things – browsers that cannot run your code don’t get it. There is a beautiful simplicity in that.
  • You have a clear cut-off point – you know what you support and what you don’t. This cuts down immensely on the testing time of your solutions. As you know IE6 never gets any JavaScript, there is no need to test on it – if you enhanced progressively.
  • You have a reason to write sensible HTML and plain CSS – you know this is your safety blanket that gets delivered to all browsers. And in most cases, having HTML that works without scripting is a great baseline for the more sophisticated solution that happens once JS does its magic.

It was a great idea, and it got some traction. But then it got replaced by something else: abstraction libraries like jQuery, Prototype, MooTools, YUI and hundreds of others, most of which are now forgotten – but sadly not removed from old implementations.

It’s a kind of magic: here come the libraries

Abstraction libraries promised (and in some cases still promise) us a lot of things:

  • They sanitised behaviour across browsers – under the hood they do a lot of bug workarounds and forking for different browsers to make things work. This is a lot of effort, and it resulted in browser bugs never getting fixed.
  • They allow us to write less and achieve more – which sounds like a very pragmatic way of working. It also means we create more code than is needed. It doesn’t look like much to use a few plugins and write 10 lines of abstraction code to create a product, but under the hood we made ourselves dependent on a lot of magic. We also have a responsibility to keep our helper libraries up to date and to test in the browsers we now promise to support. We doubled our responsibilities for the sake of not having to work around browser issues ourselves.
  • They protected us from having to learn about the DOM – we didn’t need to type in those long names or use the convoluted way of adding a new element with insertBefore().

There is no doubt that the DOM is, in hindsight, a terrible API and that its implementations are buggy. But there is also no doubt that we allowed it to stay that way by abstracting our issues away. Libraries bred a whole generation of developers who see the browser as something to convert code into, not something to write code for. They also slowed down the demand for fixing problems in browsers.

Nowadays, the abstraction libraries of the DOM scripting days are the landfill of the web. Many don’t get updated, and quite a few new features cannot be implemented in browsers in a straightforward fashion because they would break web products relying on library abstractions with the same names.

Cutting the mustard

The idea of DOM scripting was to test for capabilities and use them instead of simulating them with convoluted replacements that work in older browsers. It removed a lot of the hackiness of the web and unspeakable things like writing out content with document.write() inside inline script elements.

The problem with capability testing is that it can backfire:

  • Support for one feature doesn’t mean others are supported – a lot of the time browsers support features in waves, depending on demand and complexity.
  • Browsers lie to you – often there was rudimentary support for an object, but browsers lacked the methods it should have come with.
  • You never know what you’ll want to support – and testing for each and every feature is tedious.

This is why we started defining other cut-off points. The developers at the BBC called this “cutting the mustard” and – after looking at support charts of browsers and testing the real support – defined the following test as a good one to weed out old browsers:

if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
  // bootstrap the javascript application
}

Jake Archibald defined an even more drastic one for mobile products, filtering out both old versions of Internet Explorer and older WebKit on mobiles:

if (!('visibilityState' in document)) return;

You can then layer on more functionality in tests:

if ('visibilityState' in document) { 
  // Modern browser. Let's load JavaScript
  if ('serviceWorker' in navigator) {
    // Let's add offline support
    navigator.serviceWorker.register('sw.js', {
      scope: './'
    });
  }
}

This is great. So here is my proposal: features of ES6 can be detected – even those that are completely new syntax. Kyle Simpson’s featuretests.io is a pretty comprehensive library of tests that does exactly that.

How about we make support for a few ES6 features our new “cutting the mustard”?
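As a sketch of what such an ES6 mustard cut could look like – the function name and the particular features tested here are illustrative assumptions, not a canonical list:

```javascript
// New syntax cannot be detected with simple `in` checks, because an
// old parser throws a SyntaxError the moment it sees the syntax.
// Passing the syntax as a string to the Function constructor confines
// parsing to a place where the error can be caught.
function supportsES6() {
  try {
    new Function('(a = 0) => a');   // arrow functions, default parameters
    new Function('class Test {}');  // class syntax
  } catch (e) {
    return false; // the parser choked: legacy browser
  }
  // API-level checks in the spirit of the original mustard cut
  return typeof Promise !== 'undefined' &&
         typeof Symbol !== 'undefined';
}

if (supportsES6()) {
  // load and bootstrap the application here
}
```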

This results in some good opportunities:

  • We will get real use of ES6 features in browsers – this allows us to improve its performance and find browser issues to fix.
  • We get promises – which not only make async work much easier, but are also the baseline of almost every new API the web really needs (see Service Workers, for example)
  • We are one step closer to real modules in JavaScript
  • We will get fetch – a much saner way to load dynamic content than Ajax
  • We get in-built templating with template literals
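Two of those wins – promises and the in-built templating – can be sketched in a few lines; the `render` helper and the article object here are made-up examples, not part of any proposal:

```javascript
// Template literals provide the in-built templating: multi-line
// strings with ${} interpolation, no library required.
function render(article) {
  return `<article>
  <h2>${article.title}</h2>
  <p>${article.summary}</p>
</article>`;
}

// Promises make the asynchronous flow explicit and chainable.
Promise.resolve({ title: 'ES6 as baseline', summary: 'Cut the mustard on syntax.' })
  .then(function (article) {
    console.log(render(article));
  });
```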

The biggest win: iOS and Safari

Safari has lately become the problem child of the browser space. Many an innovation agreed by other players will fail to get traction if iOS is not on board. It is the stock browser of iOS and no other browser engine is allowed. No matter how much service and interface layer you see in Chrome or Opera for iOS, under the hood ticks the same engine.

And iOS is the golden child of the mobile world: it has the most beautiful devices, the most affluent users not shy about spending money and it doesn’t suffer from the fragmentation issues Android has. Android has larger numbers, but much less revenue per person.

That means that whatever doesn’t run in Safari on iOS isn’t ready to reach the audience that the people who pay us deem the most important. Safari is the one and only browser engine on iOS, its roadmap is much foggier than that of other browsers, and its spokespeople are fashionably absent from every public discussion.

This sounds familiar and brings back terrible memories of an Internet Explorer monoculture as explained by Nolan Lawson in Safari is the new IE.

This is not going away any time soon. And many of the standards proposals implemented in Chrome and Firefox are red boxes on caniuse.com in the mobile Safari column.

However, the ES6 support of Mobile Safari is much better.

Can ES6 features make a sensible cut off point?

This is a bold idea. But I think a great one.

  • We have a chance with ES6 to innovate the web much more quickly than we could with other standard proposals that need browser maker agreement.
  • Legacy browsers will never get new APIs, and patching for them with polyfills and libraries results in a mess – better to let them have HTML and CSS
  • This goes hand-in-hand with the extensible web manifesto
  • Using ES6 features in production is the only way to make them perform well in browsers. You can’t optimise what isn’t used.

What do you think? Let’s take a look at support, and define a new “cutting the mustard”, extending this idea from API support to also include syntax changes.

Photo Credit: frankieleon


CausalImpact: A new open-source package for estimating causal effects in time series

How can we measure the number of additional clicks or sales that an AdWords campaign generated? How can we estimate the impact of a new feature on app downloads? How do we compare the effectiveness of publicity across countries?

In principle, all of these questions can be answered through causal inference.

In practice, estimating a causal effect accurately is hard, especially when a randomised experiment is not available. One approach we've been developing at Google is based on Bayesian structural time-series models. We use these models to construct a synthetic control — what would have happened to our outcome metric in the absence of the intervention. This approach makes it possible to estimate the causal effect that can be attributed to the intervention, as well as its evolution over time.

We've been testing and applying structural time-series models for some time at Google. For example, we've used them to better understand the effectiveness of advertising campaigns and work out their return on investment. We've also applied the models to settings where a randomised experiment was available, to check how similar our effect estimates would have been without an experimental control.

Today, we're excited to announce the release of CausalImpact, an open-source R package that makes causal analyses simple and fast. With its release, all of our advertisers and users will be able to use the same powerful methods for estimating causal effects that we've been using ourselves.

Our main motivation behind creating the package has been to find a better way of measuring the impact of ad campaigns on outcomes. However, the CausalImpact package could be used for many other applications involving causal inference. Examples include problems found in economics, epidemiology, or the political and social sciences.

How the package works
The CausalImpact R package implements a Bayesian approach to estimating the causal effect of a designed intervention on a time series. Given a response time series (e.g., clicks) and a set of control time series (e.g., clicks in non-affected markets, clicks on other sites, or Google Trends data), the package constructs a Bayesian structural time-series model with a built-in spike-and-slab prior for automatic variable selection. This model is then used to predict the counterfactual, i.e., how the response metric would have evolved after the intervention if the intervention had not occurred.

As with all methods in causal inference, valid conclusions require us to check for any given situation whether key model assumptions are fulfilled. In the case of CausalImpact, we are looking for a set of control time series which are predictive of the outcome time series in the pre-intervention period. In addition, the control time series must not themselves have been affected by the intervention. For details, see Brodersen et al. (2014).

A simple example
The figure below shows an application of the R package. Based on the observed data before the intervention (black) and a control time series (not shown), the model has computed what would have happened after the intervention at time point 70 in the absence of the intervention (blue).

The difference between the actual observed data and the prediction during the post-intervention period is an estimate of the causal effect of the intervention. The first panel shows the observed and predicted response on the original scale. The second panel shows the difference between the two, i.e., the causal effect for each point in time. The third panel shows the individual causal effects added up in time.
The script used to create the above figure is shown in the left part of the window below. Using package defaults means our analysis boils down to just a single line of code: a call to the function CausalImpact() in line 10. The right-hand side of the window shows the resulting numeric output. For details on how to customize the model, see the documentation.
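For readers without access to the figure, the usage pattern boils down to something like the following R sketch, adapted from the package documentation’s simulated-data example (the series are synthetic, with an effect of +10 injected after time point 70):

```r
library(CausalImpact)

# Simulate a control series x1 and a response y that follows it,
# then inject an intervention effect after time point 70.
set.seed(1)
x1 <- 100 + arima.sim(model = list(ar = 0.999), n = 100)
y  <- 1.2 * x1 + rnorm(100)
y[71:100] <- y[71:100] + 10
data <- cbind(y, x1)

pre.period  <- c(1, 70)    # before the intervention
post.period <- c(71, 100)  # after the intervention

# The analysis itself is the promised single line:
impact <- CausalImpact(data, pre.period, post.period)

plot(impact)     # the three panels described above
summary(impact)  # numeric estimate of the causal effect
```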
How to get started
The best place to start is the package documentation. The package is hosted on GitHub and can be installed using:

install.packages("devtools")
library(devtools)
devtools::install_github("google/CausalImpact")
library(CausalImpact)

By Kay H. Brodersen, Google

1 public comment

skorgu:
I assume this is "stuff statisticians do daily dumbed down for people like me" vs Amazing Google Special Sauce.

Drupal Association News: From Poverty to Prosperity: How Drupal is Improving Lives in South Los Angeles


For many people all over the world, Drupal is a fun hobby or even a means to a career. But for some young men in South Los Angeles, it’s more than that: it’s a ticket to a better life.

Teens Exploring Technology is the brainchild of Oscar Menjivar, a social entrepreneur, programmer, and Drupal user. The program serves young men who are at risk of recruitment by gangs in Los Angeles’ southern neighborhoods by bringing them off the streets and educating them on community, leadership, academics, and technology.

Each year, thirty or more high-school boys are selected to participate in the program. Through it, they are introduced to computers and computing, and attend weekly classes held by the program and hosted in one of the classrooms at the University of Southern California (USC). Classes are instructed by volunteers who donate their time and expertise to the program, teaching the boys to improve their lives and their community through technological innovation.

“Currently, we partner with USC but we are starting to look at other universities for expansion,” said Menjivar. “Our program is in demand and we need to expand. Right now, we’re building relationships with other universities, so in the next few years we’ll probably be meeting at USC and another university in the area."

The program, which is completely free for its students, has already made waves in its local community. Numerous alumni of Teens Exploring Technology are currently studying Computer Science and Information Systems at schools such as Stanford, Syracuse, USC, University of California Los Angeles (UCLA), and elsewhere; the projects that these students completed while participating in Teens Exploring Technology, meanwhile, are still doing good in their communities.

“Last year, one of the groups developed a Drupal website called South LA Run,” said Menjivar. "It’s an interactive map that displays safe places where people in south central LA can run. The site allows users to make accounts, and create and share routes with each other. Our students collected data and research from the community in South LA, then used it to build the site, which launched last summer.

"The project perfectly embodied our mission to help the kids recognize some of the problems in their communities, identify ways they can solve these problems, and give them the resources to solve those problems with technology,” Menjivar added.

Fighting poverty with technology

The program, which has won a Google Rise award, was inspired by Menjivar’s past.

"I grew up in South Los Angeles in the ‘90s and went to one of the worst high schools in the city,” said Menjivar. “They promised me a technology magnet program, but at the time we had nothing but typing classes. The lack of resources at my school made it harder for me to focus on bigger goals in life, like college.”

From a young age, Menjivar had been interested in computing and computer science. "I wanted to do computer science and learn how to code, and [my upbringing] was a huge barrier for me to overcome. Luckily, I had a good friend in college who took me under his wing.” Now, Menjivar is paying the favor forward by giving young men in rough neighborhoods the same help that he once received.

“Seven or eight years ago, I went back to my old high school and spoke to sixty kids. I asked if they knew what a website was, or knew what HTML was, and out of these kids only 5 of them knew what that meant. That was what opened my eyes,” he said. “I thought, there’s something that we need to do about this.”

For most young men who live in the inner cities, survival can be difficult. Many are recruited by gangs, or turn to crime to keep money coming in. "The biggest problem that I encountered with myself was that, in the '90s I had a lot of friends who… one ended up shot, another ended up in jail, and most didn’t go to college,” said Menjivar. "I was lucky because I had good mentors, but most of my friends didn’t have the same opportunity."

Now, Teens Exploring Technology is serving the neighborhood that Menjivar himself grew up in. The program focuses on educating young at-risk men about technology, inspiring them to use technology for social good, and instilling high-integrity values in the process. But Menjivar doesn’t want to stop there.

"The overall vision for what we’re doing is to develop leaders and change makers who can improve the world through technology,” Menjivar said. “We want our students to go and use technology for good, and develop solutions for their communities. Our main focus is always on addressing problems in our students’ community, specifically how we can use technology to transform the lives of kids.”

Doing good with Drupal

In the Teens Exploring Technology program, the participants are introduced to a wide range of technologies, and Drupal is by far the most popular.

“We decided to use Drupal because it gives the kids a chance to learn on the spot and not have to wait for something to be pushed out,” Menjivar said. “They can practice their coding skills, and if they make a mistake they can redo it again easily in Drupal. The flexibility of it, the modules that the kids can play with, and the themes that Drupal can do all make it very popular. With kids, you have to be able to give them a choice for how to customize their website and make it their own, and Drupal does that really well.

“Last year, we had 8 different web apps and I would say 4 of them were Drupal-based. The other ones were WordPress, Android, iPhone, and Shortstack, which is a Facebook app. This year we’re throwing in Unity, so the kids will be able to build games.

“Every year we experiment a lot but Drupal always stays at the core of what we do,” said Menjivar.

How Teens Exploring Technology is changing South Los Angeles

The pilot program for Teens Exploring Technology began five years ago.

“At first, we did recruiting,” said Menjivar. “We went out into the community and approached kids about participating in the program that first year, but it’s all word of mouth now. The kids call themselves TxTrs, and they really spread the word. It happens often that, in schools, an 8th or 9th grader will come to a current student and say 'I want to do this, how do I do this.’

“In the community, we feel that people are starting to recognize potential with technology. We had 150 applications this past year, and even though we were only able to pick 45 participants, we’ve created a database of the kids who didn’t get in and their parents. We reach out to give them information whenever we can, and pretty soon we’ll be opening a space where everyone can come, build with technology, and take workshops on different tools,” Menjivar added.

Helping at-risk young men build better lives for themselves and for their communities is at the heart of what Menjivar does, but he doesn’t plan to stop just with Teens Exploring Technology. Currently, the Teens Exploring Technology team is working to expand the program so that everyone in South Los Angeles has an opportunity to learn and grow.

“We’re about to open the first ever hacker/tech space in South LA where people in the community — not just boys but everyone else, girls, older people — can come and learn how to develop and learn to make web apps,” Menjivar said. "We’re excited about it. We’ll be helping people learn about CSS, HTML, JavaScript, and other different platforms. It's a huge step for us because we’ll be able to do summer programs with the boys in Teens Exploring Technology,” Menjivar added, “and then take those concepts over to our Hackerspace and encourage the community to initiate change through technology."

Menjivar’s vision for the Hackerspace isn’t one of a formal classroom, but rather a safe space for knowledge-sharing where people can help each other out, or, in his words, “We want a ‘learn by doing’ space.”

“We want to build an organic community of technology culture so people can come in and do peer-to-peer teaching,” Menjivar said. “We want it to be a place where you can come hang out and have fun while learning to build online products. We aim to build a culture of knowledge using the latest dev tools.”

“I find that the best way to build knowledge is together, instead of just doing workshops all the time,” Menjivar added.

“When we began setting the place up, picture a big mess right in the middle of the room: chairs everywhere and stucco and paint all over the place,” said Menjivar. “People came in and asked us what we were doing, and when we told them they could come and learn to develop, they got excited. In fact, as soon as we announced the Hackerspace to the community, we had tons of people coming in and asking how they could get involved.

“The community in South LA has a lot of talent, but it just isn’t being nurtured and fostered. So that’s what we want to do,” said Menjivar.

Getting Involved

Alumni of the Teens Exploring Technology program give back by donating their expertise and recruiting for the community, but the program’s expansion means that more help is needed.

“Right now, we’ve got a summer leadership academy going on for boys who are between 14 and 17 years old,” said Menjivar. “We put the kids in production and development groups, and then everyone picks a different role: product developer, project manager, and so on. The boys go through the process of identifying a problem and then using technology to solve that problem, and to make this happen, we need mentors.

“Finding volunteers with exceptional skills is critical. We don’t just want people to volunteer, we want them to build relationships. Our volunteers become role models to the kids, become people they can look up to. Finding volunteers who can commit an hour to the program, and who are willing to stay in touch with the kids afterwards, can be a challenge.”

Beyond the need for more volunteers, resources are tight with the program. “Getting funding is a challenge, especially since it’s a completely free program for the students,” said Menjivar. “Many of the boys we serve are from low-income families, families whose annual income is about $15,000. In order for us to serve more students and provide new opportunities, we need to increase our income. This year we were invited to a startup weekend but we didn’t have transportation, so going was difficult. Funding is definitely a challenge.”

“One of the questions we ask ourselves a lot is, how do we use this program to continue helping the Drupal community grow, and how do we get the Drupal community more involved in the future? One thing that would help would be sponsorship from companies for the program and for its volunteers.

“Our summer volunteers put in 20-25 hours a week helping the boys, and do so for no pay. Right now we’re looking for people or companies who can sponsor those volunteers, and maybe even give them a stipend,” said Menjivar.

"Currently, the culture of creating technology doesn’t exist in South L.A., so we’re building that technical dream and people are recognizing that. We’ve become the place where, if you want to learn to build or create, you go to Teens Exploring Technology or you go to Hackerspace. It’s a small space but I’m looking forward to seeing what comes out of it,” said Menjivar.

"Above all, the emphasis for me is our pillars of community, leadership, academics, and technology, because that’s what we anchor ourselves around. We want to help our kids understand how those pillars change the world, and really understand the technology that will make a difference in their lives and the lives of others as they become developers."

For more information on the program, or to get involved, please contact the Teens Exploring Technology team, or reach out to Oscar Menjivar via Twitter at @urbantxt.
