Are you doing MDA (Model Driven Architecture) right now? If so, what tools do you use, and how is it working out?

Submitted 2019-12-31 14:40:54

Question


Model Driven Architecture is the idea that you create models which express the problem you need to solve in a way that is free of any (or at least most) implementation technologies, and then you generate implementation for one or more specific platforms. The claim is that working at a higher level of abstraction is far more powerful and productive. In addition, your models outlive technologies (so you still have something when your first language / platform becomes obsolete that you can use for your next generation solution). Another key claimed benefit is that much of the boilerplate and "grunt work" can be generated. Once the computer understands the semantics of your situation, it can help you more.

Some claim this approach is 10 times more productive, and that it is the way we will all be building software in 10 years.

However, this is all just theory. I am wondering what the outcomes are when the rubber meets the road. Also, the "official" version of MDA is from the OMG, and seems very heavy. It is heavily based on UML, which might be considered good or bad depending on who you ask (I'm leaning towards "bad").

But, in spite of those concerns, it is hard to argue with the idea of working at a higher level of abstraction and "teaching" the computer to understand the semantics of your problem and solution. Imagine a series of ER models which simply express truth, and then imagine using those to generate a significant portion of your solution, first in one set of technologies and then again in another set of technologies.

So, I'd love to hear from people who really are doing MDA right now ("official" or not). What tools are you using? How is it working out? How much of the theoretical promise have you been able to capture? Do you see a true 10X effectiveness increase?


Answer 1:


I tried it once. Roughly halfway through the project I realized that my models were hopelessly out of date with my code, and were so complex that keeping them up to date was prohibitive and was slowing me down.

The problem is that software is full of edge cases. Models are great at capturing the larger picture, but once you start to actually code the implementation you keep finding all those edge cases, and before too long you notice that the model has become far too granular. You then have to choose between maintaining the model and getting some code written. Maybe the boilerplate generation is a benefit when starting up, but after that the benefits quickly vanish, and I found that I suffered a drastic drop in productivity. The models eventually disappeared from that project.




Answer 2:


The lack of response to this question is somewhat ominous... maybe I'll let Dijkstra field it.

... Because computers appeared in a decade when faith in the progress and wholesomeness of science and technology was virtually unlimited, it might be wise to recall that, in view of its original objectives, mankind's scientific endeavours over, say, the last five centuries have been a spectacular failure.

As you all remember, the first and foremost objective was the development of the Elixir that would give the one that drank it Eternal Youth. But since there is not much point in eternal poverty, the world of science quickly embarked on its second project, viz. the Philosopher's Stone that would enable you to make as much Gold as you needed.

...

The quest for the ideal programming language and the ideal man-machine interface that would make the software crisis melt like snow in the sun had —and still has!— all the characteristics of the search for the Elixir and the Stone. This search receives strong support from two sides, firstly from the fact that the working of miracles is the very least that you can expect from computers, and secondly from the financial and political backing from a society that had always asked for the Elixir and the Stone in the first place.

Two major streams can be distinguished, the quest for the Stone and the quest for the Elixir.

The quest for the Stone is based on the assumption that our "programming tools" are too weak. One example is the belief that current programming languages lack the "features" we need. PL/I was one of the more spectacular would-be stones produced. I still remember the advertisement in Datamation, 1968, in which a smiling Susie Mayer announces in full colour that she has solved all her programming problems by switching to PL/I. It was only too foreseeable that, a few years later, poor Susie Mayer would smile no longer. Needless to say, the quest went on and in due time a next would-be stone was produced in the form of Ada (behind the Iron Curtain perceptively referred to as PL/II). Even the most elementary astrology for beginners suffices to predict that Ada will not be the last stone of this type.

...

Another series of stones in the form of "programming tools" is produced under the banner of "software engineering", which, as time went by, has sought to replace intellectual discipline by management discipline to the extent that it has now accepted as its charter "How to program if you cannot."




Answer 3:


I have been doing my own independent research in the Model-Driven Software Development area since 1999. In 2006 I finally arrived at a generic modeling methodology that I labeled ABSE (Atom-Based Software Engineering).

ABSE builds on two fundamental premises:

  • Programming is about problem decomposition
  • Everything can be represented as a tree
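A toy sketch of that second premise (the `Atom` class and node kinds below are my own invention for illustration, not part of ABSE itself): a whole application model as one uniform tree of named nodes.

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    """A node in a tree-shaped program model."""
    name: str
    kind: str                      # e.g. "entity", "field", "service"
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

    def walk(self, depth=0):
        """Yield (depth, atom) pairs in pre-order."""
        yield depth, self
        for c in self.children:
            yield from c.walk(depth + 1)

# A whole (toy) application expressed as one tree
app = Atom("Shop", "application")
customer = app.add(Atom("Customer", "entity"))
customer.add(Atom("name", "field"))
customer.add(Atom("email", "field"))

for depth, atom in app.walk():
    print("  " * depth + f"{atom.kind}: {atom.name}")
```

Because every element lives in one tree, generic operations such as traversal, diffing, or code generation can be written once against the node type.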

Some ABSE features:

  • It can support all other forms of software engineering, from the traditional file-oriented methods up to Component-Based Development, Aspect-Oriented Programming, Domain-Specific Modeling, Software Product Lines and Software Factories.

  • It is generic enough to be applied to enterprise software, embedded, games, avionics, internet, any domain in fact.

  • You don't need to be a rocket scientist to use it effectively. ABSE is accessible to the "mere mortal developer". There's no complexity like the one found in oAW/MDA/XMI/GMF/etc. tool chains.

  • Its meta-metamodel is designed to support 100% code generation from the model. No round-trip necessary. The custom/generated code mix is directly supported by the metamodel.

  • The model can be concurrently manipulated. Workflows and version control can be applied (tool support needed).

It may sound utopian, but I have actually left the research phase and am now implementing an IDE that puts all of the above into practice. I expect to have a basic prototype ready in a few weeks (around the end of April). The IDE (named AtomWeaver) is being built through ABSE, so AtomWeaver will be the first proof of concept of the ABSE methodology.

So, this is not MDA (thankfully!), but it is at least a very manageable approach. As the inventor of ABSE I am understandably excited about it, but I am sure Model-Driven Software Development will get a boost in 2009!

Stay tuned...




Answer 4:


Model-Driven Software Development is still a niche area but there are published case studies and a growing body of other literature showing success over hand-coded methods.

The OMG's MDA is just one approach, other people are showing success using Domain-Specific Languages (that don't use UML for modelling).

The key is to generate code from the models and, when the generator doesn't produce what you want, to update the generator - not to modify the generated code. Specialist tooling to help you do this has been around for years, but interest in the approach has grown over the last five years or so due to Microsoft's move into this area and to open-source projects like openArchitectureWare in the Eclipse world.
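That workflow can be sketched with a minimal template-based generator (the model format and template below are invented for illustration, not any particular tool's format):

```python
from string import Template

# A tiny "model": entities and their typed fields (invented example data)
model = {
    "Customer": {"name": "str", "email": "str"},
    "Order": {"total": "float"},
}

CLASS_TMPL = Template(
    "class $name:\n"
    "    def __init__(self, $args):\n"
    "$assigns"
)

def generate(model):
    """Emit one class per entity. To change the output,
    change this generator -- never the generated files."""
    out = []
    for name, fields in model.items():
        args = ", ".join(f"{f}: {t}" for f, t in fields.items())
        assigns = "".join(f"        self.{f} = {f}\n" for f in fields)
        out.append(CLASS_TMPL.substitute(name=name, args=args, assigns=assigns))
    return "\n".join(out)

print(generate(model))
```

The discipline is the whole point: generated files are treated as build artifacts, so fixes always flow through the model or the generator.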

I run a couple of sites: www.modeldrivensoftware.net and www.codegeneration.net where you can get more discussion, interviews, articles and tooling options on these topics.




Answer 5:


I started working with model-driven technologies and DSLs in 1997, and I am more and more enthusiastic about MDE.

I can testify that a 10-times productivity increase (and perhaps even more ;-) is achievable under certain circumstances. I have implemented many model-driven software factories that could generate executable software from very simple models, from the persistence layer to the UI layer, together with the generated technical documentation.

But I don't follow the MDA standard, for several reasons. The MDA promise is to express your software in a PIM model and then be able to transform it automatically into one or several technical stacks (PSMs).

But:

  • who really needs to target several technical stacks in real life? Most projects target one single, well-defined architecture.
  • the magic of MDA lies in the PIM->PSM transformation, but model2model transformation done in an iterative and incremental way is tough:
    • model2model is much more complicated than model2text to implement, debug, and maintain.
    • as it is rarely possible to generate 100% of a software system, details have to be added to the resulting PSM model and preserved transformation after transformation. That means a merge operation (3-way, to remember the added details), and when dealing with models, merging graphs of objects is far more complicated than textual merging (which works pretty well).
    • you have to deal with a PSM model (that is, a model that looks very close to your final generated source code). This is interesting for the tool vendor, since ready-to-use PSM profiles and associated code generators can be sold and shipped with the MDA tool.
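To see why model2text is the easier of the two, compare a toy PIM->PSM object transformation with emitting the target text directly (all names and formats here are invented):

```python
# A PIM element, platform-independent
pim = {"entity": "Customer", "fields": ["name", "email"]}

# model2model: build an intermediate PSM object graph that must itself
# be kept consistent, merged with hand-added details, and maintained...
def to_psm(pim):
    return {
        "table": pim["entity"].upper(),
        "columns": [{"name": f, "type": "VARCHAR"} for f in pim["fields"]],
    }

# ...versus model2text: go straight to the artifact you actually ship,
# where ordinary textual diff/merge tools apply
def to_sql(pim):
    cols = ",\n  ".join(f"{f} VARCHAR" for f in pim["fields"])
    return f"CREATE TABLE {pim['entity'].upper()} (\n  {cols}\n);"

print(to_sql(pim))
```

The text path ends in flat files that line-based merge handles well; the object-graph path leaves you merging structures for which no mature 3-way tooling exists.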

I advocate MDE strategies where the PIM is a DSL that describes your logical architecture (independently of any technical stack), and where the code is generated from this PIM by a custom, project-specific code generator.

Pros:

  • you don't have to deal with a complex, technical PSM model. You have your code instead.
  • using DSL techniques, the PIM is more efficient, sustainable, and expressive, and is easier for code and document generators to interpret. Models stay simple and precise.
  • it forces you to define your architectural requirements and concepts very early (since they form your PIM metamodel), independently of any technical stack. Usually this means identifying the various types of data, services, and UI components, with their definitions, capabilities, and features (attributes, links to other concepts, ...).
  • the generated code fits your needs, since it is custom. And you can keep it even simpler if your generated code extends some manually maintained framework classes.
  • you capture knowledge in several orthogonal ways:
    • models capture the functionality / the business
    • code generators capture the technical mapping decisions from your logical architectural components to a particular technical stack
    • the PIM DSL captures the definition of your logical architecture
  • with the logical-architecture-oriented PIM, it is possible to generate all the technical code and other non-code files (configs, properties, ...). Developers can focus on implementing the business functionality that cannot be fully expressed in the model, and usually don't have to deal with the technical stack anymore.
  • merge operations only involve flat source code files, and this works pretty well.
  • you can still define several code generators if you target several technical stacks.
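The point above about generated code extending manually maintained framework classes is essentially the "generation gap" pattern; a minimal sketch (class names are hypothetical):

```python
# --- hand-maintained framework file, never regenerated ---
class EntityBase:
    def save(self):
        # shared persistence logic lives here, written once by hand
        return f"saving {self.__class__.__name__}"

# --- generated file, overwritten on every generator run ---
# (in practice emitted by the code generator; shown inline here)
class Customer(EntityBase):
    def __init__(self, name, email):
        self.name = name
        self.email = email

print(Customer("Ada", "ada@example.com").save())
```

Keeping hand-written logic in the base class and regenerating only the subclass means the generator can overwrite its output freely without clobbering manual work.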

Cons:

  • you have to implement and maintain your own specific code and document generators
  • generally speaking, to get the best out of the DSL approach, you have to invest in specific tooling (model validation, specific wizards, dialogs, menus, import/export, ...)
  • when updating/improving your DSL, you sometimes need to migrate your models. Usually this can be done with some disposable migration code, or manually (depending on the impact)
  • all of these cons require a dedicated developer team with model-driven skills
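The "disposable migration code" mentioned above is typically just a few lines over the serialized models. For example, assuming a hypothetical DSL change that renames a property:

```python
def migrate_v1_to_v2(model: dict) -> dict:
    """One-off script: DSL v2 renamed 'attrs' to 'fields'.
    Thrown away once every stored model has been upgraded."""
    out = dict(model)
    if "attrs" in out:
        out["fields"] = out.pop("attrs")
    out["version"] = 2
    return out

old = {"version": 1, "entity": "Customer", "attrs": ["name", "email"]}
print(migrate_v1_to_v2(old))
```

Running such a script once over every stored model brings the whole model base up to the new DSL version, after which the script can be deleted.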

This particular approach can be implemented on top of an extensible UML modeler with UML profiles, or with specific model editors (textual or graphical).

The big difference between MDA and MDE could be summarized as follows:

  • MDA is a set of general-purpose tooling and languages, providing off-the-shelf model-driven profiles and tooling for everyone's needs. This is perfect for tool vendors, but I suspect that everyone's needs and contexts are different.
  • With MDE plus a specific DSL and tooling, you need some additional skilled model-driven developers to maintain your custom software factory (modeler, modeler extensions, generators, ...), but you capture knowledge everywhere and manage very simple, precise, and sustainable models.

There is a kind of conflict of interest between the two approaches: one advocates reusing off-the-shelf, pre-packaged model-driven components, while the other has you build your own capital by defining DSLs and the associated tooling.




Answer 6:


We use MDA and EMF as tools. It saves us a lot of man-hours through code generation instead of manual coding. It does require highly qualified analysts, but that is what IT is about. So we concentrate mainly on the problems themselves, and on the tools/frameworks that perform code generation and provide run-time support for the generated code. Finally, I can confirm that we do see a 10x productivity increase with MDA.



Source: https://stackoverflow.com/questions/696021/are-you-doing-mda-model-driven-architecture-right-now-if-so-what-tools-do-yo
