Introduction to machine learning with TensorFlow 2.0

Machine learning (ML) represents a new programming paradigm: instead of programming explicit rules in a language such as Java or C++, you build a system that is trained on data from a large number of examples, and that can then draw conclusions from new data based on the patterns identified in the training data.
But what does ML actually look like? In part one of Machine Learning Zero to Hero, AI evangelist Laurence Moroney (@lmoroney) walks through a basic Hello World example of how to build an ML model, introducing ideas that we will apply in the later section on computer vision further down this page.
If you want a somewhat more thorough introduction, I recommend Introduction to TensorFlow 2.0: Easier for beginners, and more powerful for experts (40:55).

Intro to Machine Learning (ML Zero to Hero, part 1)

Try this code yourself in the Hello World of Machine Learning codelab: https://goo.gle/2Zp2ZF3
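
If you want a preview of what that codelab covers, here is a minimal sketch of the classic ML Hello World (the exact numbers and names in the codelab may differ): a single-neuron network that learns the relationship y = 2x − 1 purely from six example pairs.

```python
import numpy as np
import tensorflow as tf

# Six training examples that all follow the hidden rule y = 2x - 1.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

# The simplest possible model: one dense layer with a single neuron.
model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

# Training: the model infers the rule from the examples alone,
# instead of us programming the rule explicitly.
model.fit(xs, ys, epochs=500, verbose=0)

# Inference on new data: prints a value close to 19 (= 2 * 10 - 1).
print(model.predict(np.array([10.0])))
```

Note that the model never sees the rule itself, only examples of it; that inversion is exactly the paradigm shift described above.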

Basic Computer Vision with ML (ML Zero to Hero, part 2)

In part two of Machine Learning Zero to Hero, AI evangelist Laurence Moroney (@lmoroney) walks through basic computer vision with machine learning by teaching a computer how to see and recognize different objects (object recognition).

Fashion MNIST – a dataset of clothing images for benchmarking

Fashion-MNIST is a research project by Kashif Rasul & Han Xiao in the form of a dataset of Zalando's article images. It consists of a training set of 60,000 example images and a test set of 10,000 examples. Each example is a 28 × 28 pixel grayscale image, associated with a label from 10 classes (clothing categories).
Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset when benchmarking machine learning algorithms.
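
Since the dataset ships with Keras, you can verify the numbers above in a few lines. A minimal sketch, assuming TensorFlow 2.x is installed:

```python
import numpy as np
import tensorflow as tf

# Download and split the dataset: 60,000 training and 10,000 test examples.
(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.fashion_mnist.load_data()

print(train_images.shape)       # (60000, 28, 28) - 28 x 28 grayscale images
print(test_images.shape)        # (10000, 28, 28)
print(np.unique(train_labels))  # [0 1 2 3 4 5 6 7 8 9] - the 10 classes

# Pixel values are 0-255; scale them to 0-1 before training a model.
train_images, test_images = train_images / 255.0, test_images / 255.0
```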


Fashion MNIST dataset

Why is this of interest to the scientific community?

The original MNIST dataset contains a large number of handwritten digits. People in the AI/ML/data science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset they try. "If it doesn't work on MNIST, it won't work at all," the saying goes. "Well, even if it does work on MNIST, it may still fail on others."

The MNIST dataset for digit classification

Because Fashion-MNIST shares MNIST's image size and the same structure of training and test splits, it can be used as a direct drop-in replacement when benchmarking machine learning algorithms.

Why should you replace MNIST with Fashion-MNIST? Several good reasons are listed on GitHub:

Find detailed information and the data set on GitHub

Here is a computer vision example you can try yourself: https://goo.gle/34cHkDk

See more about coding TensorFlow → https://bit.ly/Coding-TensorFlow
Subscribe to the TensorFlow channel → http://bit.ly/2ZtOqA3

Introducing convolutional neural networks (ML Zero to Hero, part 3)

In part three of Machine Learning Zero to Hero, AI evangelist Laurence Moroney (@lmoroney) discusses convolutional neural networks (CNNs) and why they are so powerful in computer vision scenarios. A convolution is a filter that passes over an image, processes it, and extracts features, or distinguishing characteristics, of the image. In this video you see how they work, by processing an image to see whether specific features can be found in it.
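
To make the idea concrete, here is a minimal sketch in plain NumPy (no TensorFlow required) of a 3 × 3 filter passing over a grayscale image. The filter used here is a standard vertical-edge detector, an assumed stand-in for the filters demonstrated in the video:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a 3x3 kernel across the image, summing elementwise products."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(image[y:y + 3, x:x + 3] * kernel)
    return out

# A classic vertical-edge filter: it responds where pixel values
# change sharply from left to right.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])

# Toy image: dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 255.0

print(convolve2d(image, kernel))  # large values mark the vertical edge
```

In a CNN, the network is not handed filters like this one; it learns the filter values itself during training, which is what makes convolutions so powerful.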

Codelab: Introduction to convolutions → http://bit.ly/2lGoC5f


Introducing convolutional neural networks (ML Zero to Hero, part 3)

Build an image classifier (ML Zero to Hero, part 4)

In part four of Machine Learning Zero to Hero, AI evangelist Laurence Moroney (@lmoroney) discusses building an image classifier for rock, paper, and scissors. In part one we showed a rock-paper-scissors scenario and discussed how difficult it can be to write code to detect and classify these gestures. In the following parts we learned how to build neural networks that detect patterns in the pixels of images, classify them, and detect features using an image classification system built on a convolutional neural network (CNN). In this episode, everything from the first three parts of the series is brought together into one.

Colab notebook: http://bit.ly/2lXXdw5
Rock, paper, scissors dataset: http://bit.ly/2kbV92O
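
For reference, here is a condensed sketch of the kind of model the video builds. It assumes the dataset above has been downloaded and unpacked into a local folder with one subfolder per class (rock/, paper/, scissors/); the directory name and exact layer sizes are illustrative, not necessarily identical to the notebook:

```python
import tensorflow as tf

# Stream 150x150 colour images from disk; 'rps/' is a placeholder path.
train_data = tf.keras.preprocessing.image_dataset_from_directory(
    'rps/', image_size=(150, 150), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(150, 150, 3)),
    # Stacked convolutions extract features; pooling shrinks the image.
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    # Three outputs: one probability per class (rock, paper, scissors).
    tf.keras.layers.Dense(3, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_data, epochs=15)
```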


Build an image classifier (ML Zero to Hero, part 4)

The basics of 3D modeling

In this tutorial you will learn a method for modeling almost anything in 3D. Whenever you sketch, draw, or 3D-model an object, you can combine and modify the four basic shapes: plane, cube, sphere, and cylinder. The video uses the Blender software, but the same principles apply to any 3D software.
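
If you prefer scripting to clicking, Blender exposes the same four primitives through its Python API. A minimal sketch (it must be run from Blender's scripting workspace, since the bpy module only exists inside Blender):

```python
import bpy

# Add the four basic shapes the tutorial combines and modifies,
# spaced out along the X axis so they don't overlap.
bpy.ops.mesh.primitive_plane_add(size=2, location=(0, 0, 0))
bpy.ops.mesh.primitive_cube_add(size=1, location=(3, 0, 0.5))
bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=(6, 0, 0.5))
bpy.ops.mesh.primitive_cylinder_add(radius=0.5, depth=1, location=(9, 0, 0.5))
```

Everything else in the tutorial, such as scaling, extruding, and combining, starts from primitives like these.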

How to Model Anything in 3D – Modeling Fundamentals (10:59)



What are neural networks and machine learning?

It seems as though everything is powered by AI these days. Yet it rarely involves particularly intelligent systems, or true artificial intelligence. AI is mostly used as a marketing term for what is usually machine learning (ML) and techniques such as neural networks (NN). These terms can seem a bit intimidating and difficult, but they are actually not as complex as you might think.

The following video gives a simple and clear explanation of what neural networks and machine learning are, how they work, and what we can use them for.

Programming biological cells – the next software revolution

The next software revolution – programming biological cells

00:04
The second half of the last century was completely defined by a technological revolution: the software revolution. The ability to program electrons on a material called silicon made possible technologies, companies and industries that were at one point unimaginable to many of us, but which have now fundamentally changed the way the world works. The first half of this century, though, is going to be transformed by a new software revolution: the living software revolution. And this will be powered by the ability to program biochemistry on a material called biology. And doing so will enable us to harness the properties of biology to generate new kinds of therapies, to repair damaged tissue, to reprogram faulty cells or even build programmable operating systems out of biochemistry. If we can realize this — and we do need to realize it — its impact will be so enormous that it will make the first software revolution pale in comparison.

01:11
And that’s because living software would transform the entirety of medicine, agriculture and energy, and these are sectors that dwarf those dominated by IT. Imagine programmable plants that fix nitrogen more effectively or resist emerging fungal pathogens, or even programming crops to be perennial rather than annual so you could double your crop yields each year. That would transform agriculture and how we’ll keep our growing and global population fed. Or imagine programmable immunity, designing and harnessing molecular devices that guide your immune system to detect, eradicate or even prevent disease. This would transform medicine and how we’ll keep our growing and aging population healthy.

01:59
We already have many of the tools that will make living software a reality. We can precisely edit genes with CRISPR. We can rewrite the genetic code one base at a time. We can even build functioning synthetic circuits out of DNA. But figuring out how and when to wield these tools is still a process of trial and error. It needs deep expertise, years of specialization. And experimental protocols are difficult to discover and all too often, difficult to reproduce. And, you know, we have a tendency in biology to focus a lot on the parts, but we all know that something like flying wouldn’t be understood by only studying feathers. So programming biology is not yet as simple as programming your computer. And then to make matters worse, living systems largely bear no resemblance to the engineered systems that you and I program every day. In contrast to engineered systems, living systems self-generate, they self-organize, they operate at molecular scales. And these molecular-level interactions lead generally to robust macro-scale output. They can even self-repair.

03:07
Consider, for example, the humble household plant, like that one sat on your mantelpiece at home that you keep forgetting to water. Every day, despite your neglect, that plant has to wake up and figure out how to allocate its resources. Will it grow, photosynthesize, produce seeds, or flower? And that’s a decision that has to be made at the level of the whole organism. But a plant doesn’t have a brain to figure all of that out. It has to make do with the cells on its leaves. They have to respond to the environment and make the decisions that affect the whole plant. So somehow there must be a program running inside these cells, a program that responds to input signals and cues and shapes what that cell will do. And then those programs must operate in a distributed way across individual cells, so that they can coordinate and that plant can grow and flourish.

03:59
If we could understand these biological programs, if we could understand biological computation, it would transform our ability to understand how and why cells do what they do. Because, if we understood these programs, we could debug them when things go wrong. Or we could learn from them how to design the kind of synthetic circuits that truly exploit the computational power of biochemistry.

04:25
My passion about this idea led me to a career in research at the interface of maths, computer science and biology. And in my work, I focus on the concept of biology as computation. And that means asking what do cells compute, and how can we uncover these biological programs? And I started to ask these questions together with some brilliant collaborators at Microsoft Research and the University of Cambridge, where together we wanted to understand the biological program running inside a unique type of cell: an embryonic stem cell. These cells are unique because they’re totally naïve. They can become anything they want: a brain cell, a heart cell, a bone cell, a lung cell, any adult cell type. This naïvety, it sets them apart, but it also ignited the imagination of the scientific community, who realized, if we could tap into that potential, we would have a powerful tool for medicine. If we could figure out how these cells make the decision to become one cell type or another, we might be able to harness them to generate cells that we need to repair diseased or damaged tissue. But realizing that vision is not without its challenges, not least because these particular cells, they emerge just six days after conception. And then within a day or so, they’re gone. They have set off down the different paths that form all the structures and organs of your adult body.

05:51
But it turns out that cell fates are a lot more plastic than we might have imagined. About 13 years ago, some scientists showed something truly revolutionary. By inserting just a handful of genes into an adult cell, like one of your skin cells, you can transform that cell back to the naïve state. And it’s a process that’s actually known as ”reprogramming,” and it allows us to imagine a kind of stem cell utopia, the ability to take a sample of a patient’s own cells, transform them back to the naïve state and use those cells to make whatever that patient might need, whether it’s brain cells or heart cells.

06:30
But over the last decade or so, figuring out how to change cell fate, it’s still a process of trial and error. Even in cases where we’ve uncovered successful experimental protocols, they’re still inefficient, and we lack a fundamental understanding of how and why they work. If you figured out how to change a stem cell into a heart cell, that hasn’t got any way of telling you how to change a stem cell into a brain cell. So we wanted to understand the biological program running inside an embryonic stem cell, and understanding the computation performed by a living system starts with asking a devastatingly simple question: What is it that system actually has to do?

07:13
Now, computer science actually has a set of strategies for dealing with what it is the software and hardware are meant to do. When you write a program, you code a piece of software, you want that software to run correctly. You want performance, functionality. You want to prevent bugs. They can cost you a lot. So when a developer writes a program, they could write down a set of specifications. These are what your program should do. Maybe it should compare the size of two numbers or order numbers by increasing size. Technology exists that allows us automatically to check whether our specifications are satisfied, whether that program does what it should do. And so our idea was that in the same way, experimental observations, things we measure in the lab, they correspond to specifications of what the biological program should do.

08:02
So we just needed to figure out a way to encode this new type of specification. So let’s say you’ve been busy in the lab and you’ve been measuring your genes and you’ve found that if Gene A is active, then Gene B or Gene C seems to be active. We can write that observation down as a mathematical expression if we can use the language of logic: If A, then B or C. Now, this is a very simple example, OK. It’s just to illustrate the point. We can encode truly rich expressions that actually capture the behavior of multiple genes or proteins over time across multiple different experiments. And so by translating our observations into mathematical expression in this way, it becomes possible to test whether or not those observations can emerge from a program of genetic interactions.
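
(The talk does not name a specific tool, but the idea of checking observations encoded as logic can be sketched with an off-the-shelf SMT solver. A toy illustration using the open-source z3-solver Python package, with hypothetical genes A, B, and C; this is not the team's actual tool:)

```python
from z3 import Bool, Implies, Or, Solver, sat

# Model three genes as Boolean variables: active or inactive.
A, B, C = Bool('A'), Bool('B'), Bool('C')

s = Solver()
# The observation from the talk: if gene A is active,
# then gene B or gene C is active.
s.add(Implies(A, Or(B, C)))
# A second, hypothetical observation: gene A is active.
s.add(A)

# The solver searches for gene states consistent with all observations.
if s.check() == sat:
    print(s.model())  # one consistent assignment, e.g. A and B active
```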

08:55
And we developed a tool to do just this. We were able to use this tool to encode observations as mathematical expressions, and then that tool would allow us to uncover the genetic program that could explain them all. And we then apply this approach to uncover the genetic program running inside embryonic stem cells to see if we could understand how to induce that naïve state. And this tool was actually built on a solver that’s deployed routinely around the world for conventional software verification. So we started with a set of nearly 50 different specifications that we generated from experimental observations of embryonic stem cells. And by encoding these observations in this tool, we were able to uncover the first molecular program that could explain all of them.

09:43
Now, that’s kind of a feat in and of itself, right? Being able to reconcile all of these different observations is not the kind of thing you can do on the back of an envelope, even if you have a really big envelope. Because we’ve got this kind of understanding, we could go one step further. We could use this program to predict what this cell might do in conditions we hadn’t yet tested. We could probe the program in silico.

10:08
And so we did just that: we generated predictions that we tested in the lab, and we found that this program was highly predictive. It told us how we could accelerate progress back to the naïve state quickly and efficiently. It told us which genes to target to do that, which genes might even hinder that process. We even found the program predicted the order in which genes would switch on. So this approach really allowed us to uncover the dynamics of what the cells are doing.

10:39
What we’ve developed, it’s not a method that’s specific to stem cell biology. Rather, it allows us to make sense of the computation being carried out by the cell in the context of genetic interactions. So really, it’s just one building block. The field urgently needs to develop new approaches to understand biological computation more broadly and at different levels, from DNA right through to the flow of information between cells. Only this kind of transformative understanding will enable us to harness biology in ways that are predictable and reliable.

11:12
But to program biology, we will also need to develop the kinds of tools and languages that allow both experimentalists and computational scientists to design biological function and have those designs compile down to the machine code of the cell, its biochemistry, so that we could then build those structures. Now, that’s something akin to a living software compiler, and I’m proud to be part of a team at Microsoft that’s working to develop one. Though to say it’s a grand challenge is kind of an understatement, but if it’s realized, it would be the final bridge between software and wetware.

11:48
More broadly, though, programming biology is only going to be possible if we can transform the field into being truly interdisciplinary. It needs us to bridge the physical and the life sciences, and scientists from each of these disciplines need to be able to work together with common languages and to have shared scientific questions.

12:08
In the long term, it’s worth remembering that many of the giant software companies and the technology that you and I work with every day could hardly have been imagined at the time we first started programming on silicon microchips. And if we start now to think about the potential for technology enabled by computational biology, we’ll see some of the steps that we need to take along the way to make that a reality. Now, there is the sobering thought that this kind of technology could be open to misuse. If we’re willing to talk about the potential for programming immune cells, we should also be thinking about the potential of bacteria engineered to evade them. There might be people willing to do that. Now, one reassuring thought in this is that — well, less so for the scientists — is that biology is a fragile thing to work with. So programming biology is not going to be something you’ll be doing in your garden shed. But because we’re at the outset of this, we can move forward with our eyes wide open. We can ask the difficult questions up front, we can put in place the necessary safeguards and, as part of that, we’ll have to think about our ethics. We’ll have to think about putting bounds on the implementation of biological function. So as part of this, research in bioethics will have to be a priority. It can’t be relegated to second place in the excitement of scientific innovation.

13:26
But the ultimate prize, the ultimate destination on this journey, would be breakthrough applications and breakthrough industries in areas from agriculture and medicine to energy and materials and even computing itself. Imagine, one day we could be powering the planet sustainably on the ultimate green energy if we could mimic something that plants figured out millennia ago: how to harness the sun’s energy with an efficiency that is unparalleled by our current solar cells. If we understood that program of quantum interactions that allow plants to absorb sunlight so efficiently, we might be able to translate that into building synthetic DNA circuits that offer the material for better solar cells. There are teams and scientists working on the fundamentals of this right now, so perhaps if it got the right attention and the right investment, it could be realized in 10 or 15 years.

14:18
So we are at the beginning of a technological revolution. Understanding this ancient type of biological computation is the critical first step. And if we can realize this, we would enter in the era of an operating system that runs living software.

Tips for better 3D prints with Cura

Cura is an open-source slicer. It was originally developed by the company Ultimaker as a slicer for their own 3D printers, but it works just as well with almost any other 3D printer on the market. It is one of the most popular programs for this purpose.

To print something on a 3D printer you need a so-called slicer: a program that converts an STL file into G-code that the 3D printer can interpret. The slicer cuts the 3D model into layers, and the G-code determines where and how filament is laid down.
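
To give a feel for what the slicer's output looks like, here is a toy sketch that emits G-code for a single square perimeter of one layer. Real slicer output also includes temperature, retraction, and fan commands, and the coordinates and extrusion values below are illustrative only:

```python
def square_layer(z, size=20.0, extrude_per_mm=0.05):
    """Emit toy G-code for one square perimeter at layer height z (mm)."""
    lines = [
        f"G0 Z{z:.2f} F600   ; travel: raise the nozzle to the layer height",
        "G0 X0 Y0 F3000     ; travel: move to the start corner, no extrusion",
    ]
    e = 0.0  # cumulative filament length pushed through the nozzle (mm)
    for x, y in [(size, 0), (size, size), (0, size), (0, 0)]:
        e += size * extrude_per_mm
        lines.append(f"G1 X{x:.1f} Y{y:.1f} E{e:.3f} F1500  ; print one edge")
    return "\n".join(lines)

# With a 0.2 mm layer height, layer one sits at Z = 0.2, layer two at 0.4...
print(square_layer(z=0.2))
```

Settings such as layer height, print speed, and infill density directly change the numbers a slicer writes into lines like these.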
Here are a number of parameters you can adjust in the settings for your 3D print:

  • Layer height: The height of each layer of the print
  • Wall thickness: The thickness of the print's walls
  • Top/bottom thickness: The thickness of the print's "roof" and "floor"
  • Infill density: How much of the interior is filled with material
  • Printing temperature: Nozzle temperature – depends on the chosen material
  • Build plate temperature: Build plate temperature – depends on the chosen material
  • Travel speed: How fast the nozzle moves when it is not extruding plastic
  • Enable print cooling: Whether to cool the plastic with the part cooling fan as it leaves the nozzle
  • Build plate adhesion: Different methods for getting better adhesion to the build plate
  • Enable supports: Whether support structures should be generated automatically
  • Support placement: Where support structures are generated
  • Support density: How dense the generated support structures are
  • Support overhang angle: The overhang angle above which support structures are generated
  • Print speed: How fast the nozzle moves while extruding plastic

In the video below, Chuck shows you three Cura slicer setting tricks for beginners that he uses all the time on his ENDER 3 and CR-10 Mini. These Cura tricks are especially useful for anyone who has just gotten started with 3D printing.

3 Cura Slicer Setting Tricks For Beginners (8:19)

Supports, or support structures

If a 3D object you want to print has overhanging parts, you need to use support material, also known as support structures. There are different ways to add supports to a 3D model. The episode of Filament Friday with Chuck below shows how to set up Cura Tree Supports and standard supports, and which settings you can adjust. He uses a simple test print to see which Cura support works best and why, shows how easy they are to remove, and how good the print looks when you are done.

CURA – Tree Supports vs Standard Supports (8:27)

With a plugin for Cura, you can create more precise manual supports exactly where you want them, saving plastic and speeding up prints compared to using the automatic tools.

Custom Manual Supports in Cura 4.3 (7:45)

Build plate adhesion – Skirt, Brim and Raft


Build plate adhesion – Skirt, Brim, Raft (3:16)

How to get your 3D-printed parts to stick to the bed and avoid curling/warping (15:31)

Wikipedia crash course

Here is a crash course in using Wikipedia to search for, find, and navigate digital information on the web.
Wikipedia is a source often dismissed by teachers and Twitter trolls as unreliable. And yes, it sometimes contains major errors and omissions, but Wikipedia is also the Internet's largest general reference work and, as such, an incredibly powerful tool.
The following video offers tips on how to use Wikipedia for good: to help you get a bird's-eye view of a topic, evaluate information better with lateral reading, and find reliable primary sources.

Using Wikipedia: Crash Course Navigating Digital Information #5 (14:15)

What is lateral reading?

Lateral reading is a reading strategy far better suited to researching information on the web than the traditional vertical reading you do with printed books or newspapers, where you read each page from top to bottom. Instead, you jump horizontally between browser tabs and read about the same topic on several different sites, quickly building an overview from multiple perspectives.
The risk of reading vertically on individual web pages, only looking for signs on the page itself that the source seems serious and trustworthy, is summed up in the following quote: "Reading that way gives misinformation and disinformation more power. It allows people to hijack your consciousness, and it also makes you part of the problem."


Check Yourself with Lateral Reading: Crash Course Navigating Digital Information #3 (13:51)

More about lateral reading can be found in the research report Lateral Reading: Reading Less and Learning More When Evaluating Digital Information by Sam Wineburg and Sarah McGrew (October 6, 2017), Stanford History Education Group Working Paper No. 2017-A1. Available at SSRN: https://ssrn.com/abstract=3048994 or http://dx.doi.org/10.2139/ssrn.3048994

Machine learning for the web

With TensorFlow.js you can quickly and easily build web applications that use artificial intelligence (AI) and machine learning (ML) with just a few lines of JavaScript.
There are plenty of pre-built and pre-trained ML models with JavaScript APIs that you can use directly for applications such as:
  • Image Classification
  • Image Segmentation
  • Object Detection
  • Pose Detection
  • Speech Commands
  • Text Classification
  • Augmented Reality
  • Gesture-based interaction
  • Speech recognition
  • Accessible web apps
  • Sentiment analysis and abuse detection
  • Conversational AI
  • Web page optimization
  • and more

Machine Learning magic for your web application with TensorFlow.js (Chrome Dev Summit 2019) (8:30)

Learn more: TensorFlow.js → https://goo.gle/2XLhMe0
TensorFlow.js GitHub → https://goo.gle/2DcgLCe
#ChromeDevSummit All Sessions → https://goo.gle/CDS19

The values and principles of the Agile Manifesto

The Four Values of The Agile Manifesto

The Agile Manifesto consists of four foundational values and 12 supporting principles that guide the Agile approach to software development. Each Agile methodology applies the four values in different ways, but all of them rely on those values to guide the development and delivery of high-quality, working software.

1. Individuals and Interactions Over Processes and Tools
The first value in the Agile Manifesto is “Individuals and interactions over processes and tools.” Valuing people more highly than processes or tools is easy to understand because it is the people who respond to business needs and drive the development process. If the process or the tools drive development, the team is less responsive to change and less likely to meet customer needs. Communication is an example of the difference between valuing individuals versus process. In the case of individuals, communication is fluid and happens when a need arises. In the case of process, communication is scheduled and requires specific content.

2. Working Software Over Comprehensive Documentation
Historically, enormous amounts of time were spent documenting the product for development and final delivery: technical specifications, technical requirements, technical prospectuses, interface design documents, test plans, documentation plans, and the approvals required for each. The list was extensive and caused long delays in development. Agile does not eliminate documentation, but it streamlines it into a form that gives developers what they need to do the work without getting bogged down in minutiae. Agile documents requirements as user stories, which are sufficient for a software developer to begin the task of building a new function.
The Agile Manifesto values documentation, but it values working software more.

3. Customer Collaboration Over Contract Negotiation
Negotiation is the period when the customer and the product manager work out the details of a delivery, with points along the way where the details may be renegotiated. Collaboration is a different creature entirely. With development models such as Waterfall, customers negotiate the requirements for the product, often in great detail, prior to any work starting. This meant the customer was involved in the process of development before development began and after it was completed, but not during the process. The Agile Manifesto describes a customer who is engaged and collaborates throughout the development process, making it far easier for development to meet the customer's needs. Agile methods may include the customer at intervals for periodic demos, but a project could just as easily have an end user as a daily part of the team, attending all meetings and ensuring the product meets the customer's business needs.

4. Responding to Change Over Following a Plan
Traditional software development regarded change as an expense, so it was to be avoided. The intention was to develop detailed, elaborate plans, with a defined set of features in which everything generally had as high a priority as everything else, and with many dependencies on delivering in a certain order so that the team could work on the next piece of the puzzle.

With Agile, the shortness of an iteration means priorities can be shifted from iteration to iteration and new features can be added into the next iteration. Agile’s view is that changes always improve a project; changes provide additional value.

Perhaps nothing illustrates Agile’s positive approach to change better than the concept of Method Tailoring, defined in An Agile Information Systems Development Method in use as: “A process or capability in which human agents determine a system development approach for a specific project situation through responsive changes in, and dynamic interplays between contexts, intentions, and method fragments.” Agile methodologies allow the Agile team to modify the process and make it fit the team rather than the other way around.

The Twelve Agile Manifesto Principles

The Twelve Principles are the guiding principles for the methodologies that are included under the title “The Agile Movement.” They describe a culture in which change is welcome, and the customer is the focus of the work. They also demonstrate the movement’s intent as described by Alistair Cockburn, one of the signatories to the Agile Manifesto, which is to bring development into alignment with business needs.

The twelve principles of agile development include:

  1. Customer satisfaction through early and continuous software delivery – Customers are happier when they receive working software at regular intervals, rather than waiting extended periods of time between releases.
  2. Accommodate changing requirements throughout the development process – The ability to avoid delays when a requirement or feature request changes.
  3. Frequent delivery of working software – Scrum accommodates this principle since the team operates in software sprints or iterations that ensure regular delivery of working software.
  4. Collaboration between the business stakeholders and developers throughout the project – Better decisions are made when the business and technical team are aligned.
  5. Support, trust, and motivate the people involved – Motivated teams are more likely to deliver their best work than unhappy teams.
  6. Enable face-to-face interactions – Communication is more successful when development teams are co-located.
  7. Working software is the primary measure of progress – Delivering functional software to the customer is the ultimate factor that measures progress.
  8. Agile processes to support a consistent development pace – Teams establish a repeatable and maintainable speed at which they can deliver working software, and they repeat it with each release.
  9. Attention to technical detail and design enhances agility – The right skills and good design ensure the team can maintain the pace, constantly improve the product, and sustain change.
  10. Simplicity – Develop just enough to get the job done for right now.
  11. Self-organizing teams encourage great architectures, requirements, and designs – Skilled and motivated team members who have decision-making power, take ownership, communicate regularly with other team members, and share ideas that deliver quality products.
  12. Regular reflections on how to become more effective – Self-improvement, process improvement, advancing skills, and techniques help team members work more efficiently.

The intention of Agile is to align development with business needs, and the success of Agile is apparent. Agile projects are customer focused and encourage customer guidance and participation. As a result, Agile has grown to be an overarching view of software development throughout the software industry and an industry all by itself.


Kent Beck
Mike Beedle
Arie van Bennekum
Alistair Cockburn
Ward Cunningham
Martin Fowler
James Grenning
Jim Highsmith
Andrew Hunt
Ron Jeffries
Jon Kern
Brian Marick
Robert C. Martin
Steve Mellor
Ken Schwaber
Jeff Sutherland
Dave Thomas

© 2001, the above authors

Agile software development

Developing with agile methods means working iteratively and incrementally, with many small, rapid partial deliveries at regular, short intervals.
The way of working is flexible and emphasizes speed, informal collaboration, close customer contact, and the ability to make changes as the work progresses (see "agile"). The requirements specification should also be open to revision during the project, since needs, wishes, and circumstances can change over time.
It is best regarded as a movement rather than a single, unified method.
The Manifesto for Agile Software Development (link) was published in 2001 by a group of programmers who had reacted against the pursuit of detailed requirements specifications, extensive documentation, and the bureaucratic methods and processes that came out of the traditional "waterfall" project model. They formed the Agile Alliance (link) and have since developed tools and other aids.
Scrum and Kanban are two common agile methods that help project teams prioritize, make work and progress visible, and reduce bottlenecks in production.
Click here for an article about Kanban.

Click here for an article describing the basics of Scrum.

Click here to read a blog post about what it means to work agile.

The Agile Manifesto consists of the following four core values and 12 supporting principles that guide the agile approach to software development. Each agile methodology applies the four values in different ways, but all of them rely on those values to guide the development and delivery of high-quality, working software.

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

1. Individuals and interactions over processes and tools.
2. Working software over comprehensive documentation.
3. Customer collaboration over contract negotiation.
4. Responding to change over following a plan.

That is, while there is value in the items on the right, we value the items on the left more.

The principles behind the Agile Manifesto

We follow these 12 principles:

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. Face-to-face conversation is the best and most effective way of conveying information, both to and within the development team.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. Sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity – the art of maximizing the amount of work not done – is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Read more about the manifesto, with clear explanations of each principle, in this English-language article.