a good start: unix k.i.s.s

The Unix philosophy of K.I.S.S – simple and beautiful software that “just works”

one’s philosophy:

because humans are not perfect – their products are not perfect.

nobody is god – meaning – nobody is perfect – nobody’s work is without error – capitalism, socialism, dictatorship, democracy… all man-made systems have errors.

So you can assume this article is also not perfect, and i encourage you to contribute, to bring it near the 99% perfection that is possible.

Nobody is perfect – but who would like to be a nobody?

Everybody wants to be somebody to someone.

“Loneliness is almost the opposite of happiness” (Manfred Spitzer) – so one defines oneself over relationships.

Being lonely is worse than smoking or fatigue!

So that’s why everyone would like to be a famous rock star – politician or actor.

To have meaningful relationships with the rest of the world

Jim Carrey: “i hope that everyone can become rich and have everything they ever wanted, so that they realize that this is not the answer.”

What is the difference between man and the animals?

Considering that 99% of the genome of pigs and humans is identical, it’s the

software

that is running on their hardware (brains) that makes us ask:

Why?

Also when it comes to software development… it’s boring being “alone” on a problem. it’s always more fun having a sort of competition over who solves the problem first 😉 (unless one guy keeps winning all the time…)

Speaking of software – because humans are not perfect – their products are not perfect.

But there are

methods to error-correct-yourself

so that your product does not waste your customers’ valuable life-time (as Bill Gates and the marketing-guru successor-CEO of Apple after Steve Jobs’ death (everybody misses your innovation massively!) have chosen to do), and so that your customers don’t need to get angry with you (more often than absolutely necessary).

Self-correcting methods like test-documentation (testing automatically (this can be done really well with APIs, not so easily with GUIs) or manually whether your software can do all the demanded functionality) cost you a lot of time – but save the customer a lot of time – hence: they ensure good software quality.

Every function, every button, every option in every possible combination shall be tested (as automatically as possible) after every change, even a minor one (in the src or in the config).
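
a minimal sketch of what such an automated check could look like, assuming pytest is installed (pip install pytest) – the two functions under test are made up, inlined only to keep the sketch self-contained:

# test_calculator.py - run after every change with: pytest -q
import pytest

# the "module under test", inlined here to keep the sketch self-contained
def add(a: int, b: int) -> int:
    return a + b

def divide(a: float, b: float) -> float:
    return a / b

@pytest.mark.parametrize("a,b,expected", [
    (0, 0, 0),                   # boundary case
    (-1, 1, 0),                  # mixed signs
    (10**9, 10**9, 2 * 10**9),   # large inputs, too
])
def test_add_combinations(a, b, expected):
    # parametrize enumerates input combinations instead of hand-writing each case
    assert add(a, b) == expected

def test_divide_by_zero_is_rejected():
    # error paths count as "options" too
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)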

the more modular (UNIX KISS) a program is (every module doing one thing, but doing it well (reliably, easily, fast)), the faster the compile time = the faster the test cycles – and fast test cycles are absolutely crucial for in-time, in-budget software development.
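
a toy illustration of that modularity (all names are made up): each module does one thing, so each can be tested on its own, and a change in one module only re-runs that module’s short test cycle:

# toy layout (names made up): one job per module, one test file per module
#
#   parser.py     <-> test_parser.py
#   validator.py  <-> test_validator.py
#   exporter.py   <-> test_exporter.py

# parser.py - does one thing: split a raw line into fields
def parse_line(line: str) -> list[str]:
    return [field.strip() for field in line.split(";")]

# validator.py - does one thing: decide whether a record is usable
def is_valid(fields: list[str]) -> bool:
    return len(fields) == 3 and all(fields)

# after touching only parser.py, only its tests need to re-run:
#   pytest test_parser.py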

so for software compilation and testing, give ’em the fastest and best possible hardware that there is; don’t try to save money there, because that could mean the failure of the whole project (not-in-time and not-in-budget).

Because bad-quality software steals our most valuable resource: human time.

And that should be a crime.

everyone likes to be a creator

in our deepest hearts and minds we know that we are born to something more than the repetitive, stupid, boring operating of a machine in a factory or office.

we want to be creators.

creators of our own life… of tools that improve our life and the life of our family and friends and everyone on this planet.

simplicity is magic – keep things simple = testable

it is no art to create a technological mess that not even god understands.

it is way harder to hit the right design point.

also to ensure good software quality: test-ability is key.

meaning: if your software project is hard, or takes a long time, to compile, debug and test (to come to the point of error)… it will very likely fail for lack of time and budget.
if one can improve the speed of repetitive tasks such as compiling and testing even just by milliseconds,
those milliseconds will (over the years) add up to hours, days, weeks, months…
If something can be simplified – even if it costs time and money – it is, down the road, most likely worth it.
if you can split it up into modules that can be compiled and tested separately: do it.
it is very worthwhile investing time into making things faster and easier to debug, and concepts easier and faster to understand.
simplicity is key.
complex things do not scale well, and even worse: errors in complex software are hard to track down, because you have to execute 1000x steps before you can reproduce the error… slowing down development speed exponentially with complexity.
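
a back-of-the-envelope calculation, with made-up (but plausible) numbers, of how those milliseconds add up:

# made-up numbers: 500 ms saved per edit-compile-test cycle
saved_s = 0.5
cycles_per_day = 100      # edit-compile-test loops per developer per day
days_per_year = 220
developers = 10
years = 10

per_dev_h = saved_s * cycles_per_day * days_per_year / 3600
team_h = per_dev_h * developers * years
print(f"{per_dev_h:.1f} h/year per developer, {team_h:.0f} h for the team over {years} years")
# -> 3.1 h/year per developer, 306 h (roughly seven to eight 40-hour weeks) for the team
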
Be sure: the customers/participants of your very very (too) complex software called “the financial economy of capitalism” will take your system where you (the politician) never dreamed of.
Making it fail… sooner or later… and hopefully not making mankind – the species that was clever enough to make itself completely dependent and reliant on the system – fail as a whole.
the ability of a creator to create is hindered by complexity.
simpler things are easier tested, faster understood, faster learned, faster adopted and also faster adapted.
so the ideal is: if you can make something simpler – do it.
it will boost the scalability of your product.
example: Bootstrap v3.0.1 has 2,000 lines of code, while jQuery Mobile has 13,000 lines of code. What do you think is easier to handle for creators?

the art of software engineering

delivering high quality software in-time and in-budget is the “holy grail” of software engineering.

it is a sport that is still under heavy development, because humans are not god, they make mistakes.

how to properly deal with this not-god property is the search for “the holy grail of software engineering – in-time + in-budget”.

it is still an evolving process where man experiences that nobody is perfect=nobody is god, and everyone has mental limits.

when you engage in programming, no matter what language, there are methods/workflows that have proven to produce good results.

here are my methods for dealing with that:

EXAMPLES + TECHNICAL TESTS

it’s quite a good idea to create an “EXAMPLE” project: in order to test & train the functionalities required.

test: is it working? how well is it working? can the language do it? is it fast/slow/reliable/unreliable?

get some test-data – works? now get a massive amount of test data (like 3 billion database records) – still works? good!

it proves that the project stands on solid technical ground and that the setup can handle the load.
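
a minimal sketch of such a load test, using Python’s built-in SQLite as a stand-in (3 billion rows belong on a real database server; the idea is to start small and keep multiplying N until something breaks):

import sqlite3
import time

N = 1_000_000  # start here, multiply by 10 until it hurts

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")

t0 = time.perf_counter()
db.executemany(
    "INSERT INTO records (payload) VALUES (?)",
    (("row-%d" % i,) for i in range(N)),
)
db.commit()
insert_s = time.perf_counter() - t0

t0 = time.perf_counter()
(count,) = db.execute("SELECT COUNT(*) FROM records").fetchone()
query_s = time.perf_counter() - t0

print(f"inserted {count} rows in {insert_s:.1f}s, counted them in {query_s:.3f}s")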

train: is it working the way i think it is working?

 pdt jquery example projects

THINK ON PAPER

at least for me: when designing algorithms/programs for a certain problem/subproblem, i always use a blank paper and make a plan with a pencil.

strangely enough, i can think much better/more logically this way and better analyse what the solution might be.

PLANS ARE THERE TO BE CHANGED

to have a plan / specification / requirements specification – book, sheet, whatever – is a good idea… but prepare to change it several times.

the waterfall-model assumes that 1. idea 2. planning 3. implementation 4. test 5. delivery to customers

can be seen as separate phases of the whole software engineering process.

they can not be separated.

why?

because man is not perfect=god, man makes mistakes.

so everything that man produces has errors. (some so minor you won’t ever notice)

but when products become complex, they contain a lot of errors, which can make the whole product unusable = unstable = unreliable.

so during the planning phase → you might want to adapt/change your idea; during the implementation phase → you might want to adapt/change your plans; during the test phase (WORK WITH TEST-DOCUMENTATION!) → you certainly will find errors in your implementation; and during the usage by a customer → the customer surely will take your product to places you have never thought of, creating completely new usecases/problems that are not in your test documentation.

add them to your test-documentation!

TEST-DOCUMENTATION

this is the way to high quality software. but it is a hard one.

it is crucial to have a document where all possible interactions/cases/usecases/problems that ever occurred with the software are marked down.

when you make changes to the code, make sure to test all these possible usecases.

because one fix might break something else.

there are automatic tests… this works fine for database-functions, but not for gui-user-interactions.
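
for example, a database function automates nicely because input and output are plain data – a sketch, again with built-in SQLite (the function under test is made up):

import sqlite3
import unittest

def insert_user(db: sqlite3.Connection, name: str) -> int:
    """The database function under test: insert a user and return its id."""
    cur = db.execute("INSERT INTO users (name) VALUES (?)", (name,))
    db.commit()
    return cur.lastrowid

class InsertUserTest(unittest.TestCase):
    def setUp(self):
        # a fresh in-memory database for every test run
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

    def test_insert_returns_id_and_persists(self):
        uid = insert_user(self.db, "alice")
        (name,) = self.db.execute("SELECT name FROM users WHERE id = ?", (uid,)).fetchone()
        self.assertEqual(name, "alice")

if __name__ == "__main__":
    unittest.main()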

so there is still room for improvement, and i hope man manages to fill this room… otherwise i think we are stuck in terms of how complex and, at the same time, stable a software can be.

it is said that it takes 10-15 years for a software product to ripen.

this is a very long time.

software development is a long-term investment of money – and of the much more valuable resources of human lifetime and nerves.

OPEN SOURCE

love open source and open source shall only be made with love and care.

because it is a gift to mankind.

every software problem solved the-open-source-way and in a language that is cross-platform for the next 100 years, can be considered “solved forever”.

it is no use doing the same thing over and over again.

do it once but do it right.

linus shares this view http://www.youtube.com/watch?v=4XpnKHJAok8 “open source is the way to do software right”.

COMPONENTS

linus also encourages making a lot of small programs and linking them together (via “text being the universal interface”, which also increases the possible reuse of a component), instead of making one big monolithic program that tries to “do it all” but fails often and miserably (windows).
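
a toy version of that idea (the program name is made up): one small program that reads plain text on stdin and writes plain text on stdout, so it composes with any other text-speaking tool:

# count_firstwords.py - one small program, text in, text out
# because the interface is plain text, it composes with anything, e.g.:
#   cat server.log | python count_firstwords.py | head
import sys
from collections import Counter

counts = Counter(
    line.split()[0]            # first word of each line, e.g. a log level
    for line in sys.stdin
    if line.strip()
)
for word, n in counts.most_common():
    print(n, word)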

9 out of 10 software projects do not “survive” in terms of money and market.

so the odds are pretty good that your nice program, which you spent so much valuable lifetime on, “disappears” from the screen and is lost for mankind.

this is sad sad sad and a waste of resources.

if you can not make your software open source….

… think about what Open Source components your software would need to work.

and implement a lot of Open Source SubProjects/Components for mankind to reuse in making this world a better/safer/nicer place.

instead of chucking it all in the bin, you did something good, even when you fail money-wise.

Kennedy once said: “an error only becomes a mistake – if you refuse to correct it”

====== WATERFALL MODEL ======

… is the idea that you could really see planning, implementation and testing as separate phases and complete them separately. (you are perfect, you are god)

Unfortunately, planning errors become visible during implementation, implementation errors become visible during testing, etc. etc. – i.e. an iterative approach which does not strictly separate these phases is needed.

When you have this iterative approach, nothing is yet “in-time” or “in-budget.”

But at least there is less blaming and destructiveness inside the team “You could have done this… You should have done that”


http://www.amazon.de/Software-Engineering-SE-Bibel-Auflage-Pearson/dp/3827372577

the first 80 pages are just a general ramp-up on what did go wrong and what can go wrong… (anything that can go wrong).

i hope it will guide me on a clear path to avoiding errors during software-planning and later during implementation.

how to build a team

====== SECRET SERVICE METHODOLOGY OF SOFTWARE DEVELOPMENT ======

maybe we can learn from the secret services (the NSA’s XKeyscore) how to build proper software:

source: http://www.theguardian.com/world/interactive/2013/jul/31/nsa-xkeyscore-program-full-presentation

xkeyscore

What was it, that made the iphone so successful?

It was the feeling of having a dead-simple, reliable, fast, stylish and high-quality software-hardware combination, and the thrill of imagining what you could do with this sort of technology that extends your abilities.

Yeah, and of course a little innovation… like two-finger zoom. That was the innovation-by-crazy-Steve-Jobs coolness factor. (the ’949 multi-touch patent, sometimes known as the Steve Jobs patent)

Innovation is good and cool… but when it comes to everyday use, and to whether you would recommend the product to a friend, then you definitely want reliable, simple, fast software: high-quality, tested software.

Not some buggy beta version of something that gets you frustrated and breaks your wifi with a software update. (iPhone 4S)

So in the long term a device is only successful if it has high quality software.

Why?

Because if software is dead-simple and high-quality… people adopt it/accept it.

If not. People will complain about your device and look for alternatives.

By the way: Apple has not only become less innovative but also lazier about its software quality… they steer the microsoft way… making everyone run off to android.

Good Software Takes Ten Years.

Source: http://www.joelonsoftware.com/articles/fog0000000017.html

Saturday, July 21, 2001

Have a look at this little chart:

[chart: number of installed seats of Lotus Notes, 1989–2000 – Source: Iris Associates]

This is a chart showing the number of installed seats of the Lotus Notes workgroup software, from the time it was introduced in 1989 through 2000. In fact when Notes 1.0 finally shipped it had been under development for five years. Notice just how dang long it took before Notes was really good enough that people started buying it. Indeed, from the first line of code written in 1984 until the hockey-stick part of the curve where things really started to turn up, about 11 years passed. During this time Ray Ozzie and his crew weren’t drinking piña coladas in St Barts. They were writing code.

The reason I’m telling you this story is that it’s not unusual for a serious software application. The Oracle RDBMS has been around for 22 years now. Windows NT development started 12 years ago. Microsoft Word is positively long in the tooth; I remember seeing Word 1.0 for DOS in high school (that dates me, doesn’t it? It was 1983.)

To experienced software people, none of this is very surprising. You write the first version of your product, a few people use it, they might like it, but there are too many obvious missing features, performance problems, whatever, so a year later, you’ve got version 2.0. Everybody argues about which features are going to go into 2.0, 3.0, 4.0, because there are so many important things to do. I remember from the Excel days how many things we had that we just had to do. Pivot Tables. 3-D spreadsheets. VBA. Data access. When you finally shipped a new version to the waiting public, people fell all over themselves to buy it. Remember Windows 3.1? And it positively, absolutely needed long file names, it needed memory protection, it needed plug and play, it needed a zillion important things that we can’t imagine living without, but there was no time, so those features had to wait for Windows 95.

But that’s just the first ten years. After that, nobody can think of a single feature that they really need. Is there anything you need that Excel 2000 or Windows 2000 doesn’t already do? With all due respect to my friends on the Office team, I can’t help but feel that there hasn’t been a useful new feature in Office since about 1995. Many of the so-called “features” added since then, like the reviled ex-paperclip and auto-document-mangling, are just annoyances and O’Reilly is doing a nice business selling books telling you how to turn them off.

So, it takes a long time to write a good program, but when it’s done, it’s done. Oh sure, you can crank out a new version every year or two, trying to get the upgrade revenues, but eventually people will ask: “why fix what ain’t broken?”


Failure to understand the ten-year rule leads to crucial business mistakes.

Mistake number 1. The Get Big Fast syndrome. This fallacy of the Internet bubble has already been thoroughly discredited elsewhere, so I won’t flog it too much. But an important observation is that the bubble companies that were trying to create software (as opposed to pet food shops) just didn’t have enough time for their software to get good. My favorite example is desktop.com, which had the beginnings of something that would have been great if they had worked on it for 10 years. But the build-to-flip mentality, the huge overstaffing and overspending of the company, and the need to raise VC every ten minutes made it impossible to develop the software over 10 years. And the 1.0 version, like everything, was really morbidly awful, and nobody could imagine using it. But desktop.com 8.0 might have been seriously cool. We’ll never know.

Mistake number 2. the Overhype syndrome. When you release 1.0, you might want to actually keep it kind of quiet. Let the early adopters find it. If you market it and promote it too heavily, when people see what you’ve actually done, they will be underwhelmed. Desktop.com is an example of this, so is Marimba, and Groove: they had so much hype on day one that people stopped in and actually looked at their 1.0 release, trying to see what all the excitement was about, but like most 1.0 products, it was about as exciting as watching grass dry. So now there are a million people running around who haven’t looked at Marimba since 1996, and who think it’s still a dorky list box that downloads Java applets that was thrown together in about 4 months.

Keeping 1.0 quiet means you have to be able to break even with fewer sales. And that means you need lower costs, which means fewer employees, which, in the early days of software development, is actually a really great idea, because if you can only afford 1 programmer at the beginning, the architecture is likely to be reasonably consistent and intelligent, instead of a big mishmash with dozens of conflicting ideas from hundreds of programmers that needs to be rewritten from scratch (like Netscape, according to the defenders of the decision to throw away all the source code and start over).

Mistake number 3. Believing in Internet Time. Around 1996, the New York Times first noticed that new Netscape web browser releases were coming out every six months or so, much faster than the usual 2 year upgrade cycle people were used to from companies like Microsoft. This led to the myth that there was something called “Internet time” in which “business moved faster.” Which would be nice, but it wasn’t true. Software was not getting created any faster, it was just getting released more often. And in the early stages of a new software product, there are so many important things to add that you can do releases every six months and still add a bunch of great features that people Gotta Have. So you do it. But you’re not writing software any faster than you did before. (I will give the Internet Explorer team credit. With IE versions 3.0 and 4.0 they probably created software about ten times faster than the industry norm. This had nothing to do with the Internet and everything to do with the fact that they had a fantastic, war-hardened team that benefited from 15 years of collective experience creating commercial software at Microsoft.)

Mistake number 4. Running out of upgrade revenues when your software is done. A bit of industry lore: in the early days (late 1980s), the PC industry was growing so fast that almost all software was sold to first time users. Microsoft generally charged about $30 for an upgrade to their $500 software packages until somebody noticed that the growth from new users was running out, and too many copies were being bought as upgrades to justify the low price. Which got us to where we are today, with upgrades generally costing 50%-60% of the price of the full version and making up the majority of the sales. Now the trouble comes when you can’t think of any new features, so you put in the paperclip, and then you take out the paperclip, and you try to charge people both times, and they aren’t falling for it. That’s when you start to wish that you had charged people for one year licenses, so you can make your product a subscription and have permission to keep taking their money even when you haven’t added any new features. It’s a neat accounting trick: if you sell a software package for $100, Wall Street will value that at $100. But if you can sell a one year license for $30, then you can claim that you’re going to get recurring revenue of $30 for the next, say, 10 years, which is worth $200 to Wall Street. Tada! Stock price doubles! (Incidentally, that’s how SAS charges for their software. They get something like 97% renewals every year.)

The trouble is that with packaged software like Microsoft’s, customers won’t fall for it. Microsoft has been trying to get their customers to accept subscription-based software since the early 90′s, and they get massive pushback from their customers every single time. Once people got used to the idea that you “own” the software that you bought, and you don’t have to upgrade if you don’t want the new features, that can be a big problem for the software company which is trying to sell a product that is already feature complete.

Mistake number 5. The “We’ll Ship It When It’s Ready” syndrome. Which reminds me. What the hell is going on with Mozilla? I made fun of them more than a year ago because three years had passed and the damn thing was still not out the door. There’s a frequently-obsolete chart on their web site which purports to show that they now think they will ship in Q4 2001. Since they don’t actually have anything like a schedule based on estimates, I’m not sure why they think this. Ah, such is the state of software development in Internet Time Land.

But I’m getting off topic. Yes, software takes 10 years to write, and no, there is no possible way a business can survive if you don’t ship anything for 10 years. By the time you discount that revenue stream from 10 years in the future to today, you get bupkis, especially since business analysts like to pretend that everything past 5 years is just “residual value” when they make their fabricated, fictitious spreadsheets that convince them that investing in sock puppets at a $100,000,000 valuation is a pretty good idea.

Anyway, getting good software over the course of 10 years assumes that for at least 8 of those years, you’re getting good feedback from your customers, and good innovations from your competitors that you can copy, and good ideas from all the people that come to work for you because they believe that your version 1.0 is promising. You have to release early, incomplete versions — but don’t overhype them or advertise them on the Super Bowl, because they’re just not that good, no matter how smart you are.

Mistake number 6. Too-frequent upgrades (a.k.a. the Corel Syndrome). At the beginning, when you’re adding new features and you don’t have a lot of existing customers, you’ll be able to release a new version every 6 months or so, and people will love you for the new features. After four or five releases like that, you have to slow down, or your existing customers will stop upgrading. They’ll skip releases because they don’t want the pain or expense of upgrading. Once they skip a release, they’ll start to convince themselves that, hey, they don’t always need the latest and greatest. I used Corel PhotoPaint 6.0 for 5 years. Yes, I know, it had all kinds of off-by-one bugs, but I knew all the off-by-one bugs and compensated by always dragging the selection one pixel to the right of where I thought it should be.


Make a ten year plan. Make sure you can survive for 10 years, because the software products that bring in a billion dollars a year all took that long. Don’t get too hung up on your version 1 and don’t think, for a minute, that you have any hope of reaching large markets with your first version. Good software, like wine, takes time.

 

This is said to be the industry’s default bible of software engineering… well… it’s a lot of paper.

Software Engineering – Die SE-Bibel für Lehre und Praxis

The German translation is unfortunately full of errors. Better to take the English one?

http://www.amazon.de/Software-Engineering-aktualisierte-Auflage-Pearson/dp/3868940995/ref=pd_cp_b_0

Product information: Ian Sommerville (author) – 4.0 out of 5 stars (5 customer reviews)

about the author:

http://www.software-engin.com/

http://iansommerville.com/techstuff/ “It’s going to be hard to build systems for digital government”

From the content: https://en.wikipedia.org/wiki/Formal_specification

Formal specification

From Wikipedia, the free encyclopedia

In computer science, formal specifications are mathematically based techniques whose purpose is to help with the implementation of systems and software. They are used to describe a system, to analyze its behavior, and to aid in its design by verifying key properties of interest through rigorous and effective reasoning tools.[1][2] These specifications are formal in the sense that they have a syntax, their semantics fall within one domain, and they are able to be used to infer useful information.[3]

Motivation

In each passing decade computer systems have become increasingly more powerful and as a result they have become more impactful to society. Because of this, better techniques are needed to assist in the design and implementation of reliable software. Formal specifications are one such way to achieve this; however, they have not reached the widespread use in software engineering that was once predicted. Other methods such as testing are more commonly used to enhance code quality.[1]

Testing finds errors (or bugs) in the implementation. It is best to find these as early as possible because the farther along in a project a bug is found, the more costly it is to fix. The idea with formal specifications is to minimize the creation of such errors. This is done by reducing the ambiguity of informal system requirements. By creating a formal specification, the designers are forced to make a detailed system analysis early on in the project. This analysis will usually reveal errors or inconsistencies that exist in the informal system requirements.[4] As a result the chance of subtle errors being introduced and going undetected in complex software systems is reduced.[1] Finding and correcting these kinds of errors early in the design stage will help to prevent expensive fixes that may arise in the future.

Testing and QA contribute to more than 50% of the total development cost of some projects; through the use of formal specifications certain testing processes may be automated leading to better and more cost-effective testing.[1]

Uses

Given such a specification, it is possible to use formal verification techniques to demonstrate that a system design is correct with respect to its specification. This allows incorrect system designs to be revised before any major investments have been made into an actual implementation. Another approach is to use provably correct refinement steps to transform a specification into a design, which is ultimately transformed into an implementation that is correct by construction.

It is important to note that a formal specification is not an implementation, but rather it may be used to develop an implementation. Formal specifications describe what a system should do, not how the system should do it.
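
to make that what-vs-how distinction concrete, here is a small illustration of mine (not from the quoted article) in Python: the specification of sorting as checkable conditions, kept separate from any particular sorting algorithm:

from collections import Counter

def satisfies_sort_spec(xs: list[int], result: list[int]) -> bool:
    """The WHAT: result must be ordered and contain exactly the elements of xs."""
    ordered = all(a <= b for a, b in zip(result, result[1:]))
    same_elements = Counter(xs) == Counter(result)
    return ordered and same_elements

def bubble_sort(xs: list[int]) -> list[int]:
    """One possible HOW - the spec above does not care which algorithm is used."""
    ys = list(xs)
    for i in range(len(ys)):
        for j in range(len(ys) - 1 - i):
            if ys[j] > ys[j + 1]:
                ys[j], ys[j + 1] = ys[j + 1], ys[j]
    return ys

# any implementation can be checked against the same specification:
assert satisfies_sort_spec([3, 1, 2], bubble_sort([3, 1, 2]))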

A good specification must have some of the following attributes: adequate, internally consistent, unambiguous, complete, satisfied, minimal [3]

A good specification will have:[3]

  • Constructability, manageability and evolvability
  • Usability
  • Communicability
  • Powerful and efficient analysis

One of the main reasons there is interest in formal specifications is that they will provide an ability to perform proofs on software implementations.[2] These proofs may be used to validate a specification, verify correctness of design, or to prove that a program satisfies a specification.[2]

Limitations

A design (or implementation) cannot ever be declared “correct” on its own. It can only ever be “correct with respect to a given specification”. Whether the formal specification correctly describes the problem to be solved is a separate issue. It is also a difficult issue to address, since it ultimately concerns the problem of constructing abstracted formal representations of an informal concrete problem domain, and such an abstraction step is not amenable to formal proof. However, it is possible to validate a specification by proving “challenge” theorems concerning properties that the specification is expected to exhibit. If correct, these theorems reinforce the specifier’s understanding of the specification and its relationship with the underlying problem domain. If not, the specification probably needs to be changed to better reflect the domain understanding of those involved with producing (and implementing) the specification.

Formal methods of software development are not widely used in industry. Most companies do not consider it cost-effective to apply them in their software development processes.[4] This may be for a variety of reasons, some of which are:

  • Time
    • High initial start up cost with low measurable returns
  • Flexibility
    • A lot of software companies use agile methodologies that focus on flexibility. Doing a formal specification of the whole system up front is often perceived as being the opposite of flexible. However, there is some research into the benefits of using formal specifications with “agile” development[5]
  • Complexity
    • They require a high level of mathematical expertise and the analytical skills to understand and apply them effectively[5]
    • A solution to this would be to develop tools and models that allow for these techniques to be implemented but hide the underlying mathematics[2][5]
  • Limited scope [3]
    • They do not capture properties of interest for all stakeholders in the project[3]
    • They do not do a good job of specifying user interfaces and user interaction [4]
  • Not cost-effective
    • This is not entirely true, by limiting their use to only core parts of critical systems they have shown to be cost-effective[4]

Other limitations:[3]

  • Isolation
  • Low-level ontologies
  • Poor guidance
  • Poor separation of concerns
  • Poor tool feedback

Paradigms

Formal specification techniques have existed in various domains and on various scales for quite some time.[6] Implementations of formal specifications will differ depending on what kind of system they are attempting to model, how they are applied and at what point in the software life cycle they have been introduced.[2] These types of models can be categorized into the following specification paradigms:

  • History-based specification [3]
    • behavior based system histories
    • assertions are interpreted over time
  • State-based Specification [3]
    • behavior based on system states
    • series of sequential steps, (e.g. a financial transaction)
    • languages such as Z, VDM or B rely on this paradigm [3]
  • Transition-based specification [3]
    • behavior based on transitions from state-to-state of the system
    • best used with a reactive system
    • languages such as Statecharts, PROMELA, STeP-SPL, RSML or SCR rely on this paradigm [3]
  • Functional specification [3]
    • specify a system as a structure of mathematical functions
    • OBJ, ASL, PLUSS, LARCH, HOL or PVS rely on this paradigm [3]
  • Operational Specification [3]
    • early languages such as Paisley, GIST, Petri nets or process algebras rely on this paradigm [3]

In addition to the above paradigms there are ways to apply certain heuristics to help improve the creation of these specifications. The paper referenced here best discusses heuristics to use when designing a specification.[6] They do so by applying a divide-and-conquer approach.

Software tools

The Z notation is an example of a leading formal specification language. Others include the Specification Language (VDM-SL) of the Vienna Development Method and the Abstract Machine Notation (AMN) of the B-Method. In the Web services area, formal specification is often used to describe non-functional properties[7] (Web services Quality of Service).


Examples

For implementation examples, refer to the links in Software Tools section.

References

  1. Hierons, R. M.; Krause, P.; Lüttgen, G.; Simons, A. J. H.; Vilkomir, S.; Woodward, M. R.; Zedan, H.; Bogdanov, K.; Bowen, J. P.; Cleaveland, R.; Derrick, J.; Dick, J.; Gheorghe, M.; Harman, M.; Kapoor, K. (2009). “Using formal specifications to support testing”. ACM Computing Surveys 41 (2): 1. doi:10.1145/1459352.1459354
  2. Gaudel, M.-C. (1994). “Formal specification techniques”. Proceedings of the 16th International Conference on Software Engineering. pp. 223–223. doi:10.1109/ICSE.1994.296781. ISBN 0-8186-5855-X
  3. Lamsweerde, A. V. (2000). “Formal specification”. Proceedings of the Conference on the Future of Software Engineering – ICSE ’00. p. 147. doi:10.1145/336512.336546. ISBN 1581132530
  4. Sommerville, Ian (2009). “Formal Specification”. Software Engineering. Retrieved 3 February 2013.
  5. Nummenmaa, Timo; Tiensuu, Aleksi; Berki, Eleni; Mikkonen, Tommi; Kuittinen, Jussi; Kultima, Annakaisa (4 August 2011). “Supporting agile development by facilitating natural user interaction with executable formal specifications”. ACM SIGSOFT Software Engineering Notes 36 (4): 1–10. doi:10.1145/1988997.2003643
  6. van der Poll, John A.; Kotze, Paula (2002). “What design heuristics may enhance the utility of a formal specification?”. Proceedings of SAICSIT ’02: 179–194.
  7. S-Cube Knowledge Model: Formal Specification


because of your bravery in reading all of this …

 

also interesting:

other users’ philosophy:

The GMPG was founded on the following principles:

  • Scientific

    • Simplicity

    • Interoperability

        • Implementations of protocols should be encouraged to interoperate.

        • Thus GMPG has chosen to use the (cc) nd license restriction for its protocols and formats to reduce mutability into non-interoperable forms.

  • Social

  • Political

    • Unanimity

      • Similar to the W3C’s principle of “consensus”, the GMPG’s designs/decisions are made by unanimity among the Founders.

    • New Assembly

      • The GMPG believes there are many opportunities to launch new efforts, and thus encourages others to do so as well.

      • Feel free to use this set of principles as a starting point; it is licensed under the Creative Commons (by) License, which of course allows for derivative works.

  • Economic

    • Profit and prosper

      • Enable people to build and sell products, without obligating them to divulge their intellectual property.

      • Thus GMPG does not include the (cc) nc license restriction on any of its efforts, nor does it contain any so-called “viral” provisions. May you profit and prosper.

Inspirations and sources

Here are a few of the inspirations and sources for many of these principles.

Note that many of these sources contain many other principles, some of which were perhaps not important enough, and some of which could even be considered counter-principles.

It is left as an exercise to the reader to determine which are which.


This web page is licensed under a Creative Commons License.

src: https://gmpg.org/principles

GMPG (Global Multimedia Protocols Group)

The links in Joe’s blogroll would look something like this:

<a href="http://jane-blog.example.org/" rel="sweetheart date met">Jane</a>
<a href="http://dave-blog.example.org/" rel="friend met">Dave</a>
<a href="http://darryl-blog.example.org/" rel="friend met">Darryl</a>
<a href="http://www.metafilter.com/">MetaFilter</a>
<a href="http://james-blog.example.com/" rel="met">James Expert</a>

src: https://gmpg.org/xfn/intro

Rant: Open Source and the concept of “release early, release often” (or publish early & publish often) → continuous integration/continuous delivery (CI/CD) → tight loops are ok, but still – linking to nirvana without redirection & badly written software that everyone uses – another case of: nothing works “ok”.