
How to Use Workflow Template Filters


Everyone says they want more choices. But the truth is that more choices just confuse people, particularly when there is only a single right choice. This is true with workflow templates. Your users don’t want to select from a huge list of workflow templates every time they want to start a new process. It’s annoying and can lead to mistakes which you have to fix.

Fortunately you can limit their choices and make both your users and your own lives easier.

Overview

There are two basic steps:

  1. Create workflow template filters.
  2. Remove the ability for users to access any workflows except those which have been assigned.

Create Workflow Template Filters

Workflow template filters define which workflows are available for a particular object type, within each group. When a user selects an object and starts a new process the dialog will only show the assigned workflows.

To create a workflow template filter you

  1. Select a group
  2. Select an object type
  3. Specify a list of workflow templates

You can do this through the GUI in the Workflow Designer application. Go to Edit → Template Filter. That works fine for setting up one or two filters. But, it can be cumbersome when you have several combinations of group and object type to configure.

That is why I prefer to create filters by setting Teamcenter preferences. That is what the GUI in Workflow Designer is actually doing.

The three steps mentioned above become:

  1. Select a group: Create a group level preference
  2. Select object type: Name the preference TC_objectType_release_procedure, where objectType is the type of object the filter applies to. Often this will be Item or ItemRevision or one of their descendants. For example, a filter for Item objects becomes a group preference named TC_Item_release_procedure. The preference type should be string with multiple values.
  3. Specify a list of workflow templates: Add the names of the workflows as values for the preference you just created.

Hide Unassigned Templates From Ordinary Users

Change the value of the site preference CR_allow_alternate_procedures to none.

The valid values are all, Assigned, or none. For all or Assigned the New Process dialog will have two radio buttons which allow users to view either all templates or just those that are assigned. The difference between all and Assigned is which button is pre-selected.

New Process Dialog with CR_allow_alternate_procedures=Assigned defaults to the list of defined workflow template filters

When you change the value to none you remove the radio buttons completely. Users can only select from the list of assigned workflow templates.

New Process Dialog with CR_allow_alternate_procedures=none only allows users to select from the list of defined workflow template filters

Give Certain Users Special Privileges

Some users, such as DBAs and workflow designers, should be able to see all of the workflows.

Create group or user CR_allow_alternate_procedures preferences set to all or Assigned for those groups or users to override the site preference.

Export → Edit → Import

If you have several similar sets of template filters to configure:

  1. Define the preference for one group and object type
  2. Use the preferences_manager utility to export the preference
  3. Edit the XML if necessary to add preferences for additional object types or to list additional workflows
  4. Import the XML with preferences_manager for each group

Caveats

Template filters are not foolproof. If a user selects multiple objects of different types then they will see the list of assigned templates for only one of the object types. They will be able to submit all of the objects to any process template in that list. To absolutely ensure that only specific types are submitted to a workflow you need to add a rule handler for that to the process template.

Also be aware that users could define their own values for CR_allow_alternate_procedures, overriding the values you have set at the site or group level.

Credit Where it is Due

Logresh Rasumani did a post a while ago on his blog about setting up workflow filters using the Workflow Designer application. That pushed me to write this post about the preferences.

The post How to Use Workflow Template Filters appeared first on The PLM Dojo.


How to Automate NX Migrations with Part Type Convert


When you import NX parts and drawings into Teamcenter there are two main challenges. The first is selecting the right item ID. I previously covered how to automate that by implementing a custom clone-autotranslate function. The second is selecting the right item type. Today I’ll show how to automate that by writing a custom Convert callback.

NX Open’s Clone API

NX’s Clone API, UF_CLONE_*, is the most customizable NX API I know. It gives you many opportunities to customize how you import NX part files. One such opportunity is UF_CLONE_register_cvt_callback(). It registers a “convert” callback that NX uses to answer questions such as which item type to use.

Cloning is the process of duplicating an existing assembly or single part file. It can be used to create similar assemblies on the native file system or within Teamcenter. It can also create a duplicate assembly where the original is on a native file system and the duplicate is in Teamcenter. This is how NX data is imported into Teamcenter and why we use the UF_CLONE API.

To automate the choice of item type we will implement our own convert callback that will look at each part’s filename and determine the item type.

Implementing the Convert Callback

First, we write a function using the following template:

extern "C" DllExport // so we can call this from an external program
UF_CLONE_convert_response_t dojo_cvt_item_type(
	UF_CLONE_convert_cb_t reason, 
	const char* part_spec, 
	char **answer)
{	
	// The magic happens here: 
	// * Parse part_spec (may include full path)
	// * Determine the item type 
	// ** infer from part number or directory
	// ** consult external database
	// ** etc.
 
	// * Allocate enough space in *answer for 
	//   the correct item type and the null terminator.
	// * set the value of *answer to the item type chosen
 
	// The return code tells NX whether to use your answer or not
	// (you could tell it to try another callback).
	// UF_CLONE_use_supplied --> use this result
	return UF_CLONE_use_supplied; 
}

The argument reason tells you what sort of information NX is looking for. For our example the value will always be UF_CLONE_part_type_convert because that is the code we will use when we register the callback.
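To make that concrete, here is a minimal sketch of one way the body might be filled in. The “DWG-” prefix rule, the item type names, and the use of malloc() for the answer buffer are all illustrative assumptions; check the NX Open documentation for the allocation strategy your release expects for strings handed back to NX.

#include <cstdlib>
#include <cstring>
#include <string>
 
#include <uf.h>
#include <uf_clone.h>
 
// A sketch only: infer the item type from the file name.
// "DWG-", "DojoDrawingItem", and "DojoDesignItem" are made-up examples.
extern "C" DllExport
UF_CLONE_convert_response_t dojo_cvt_item_type(
	UF_CLONE_convert_cb_t reason,
	const char *part_spec,
	char **answer)
{
	(void)reason; // always UF_CLONE_part_type_convert for this registration
 
	// Reduce the part spec to a bare part number: strip directory and ".prt"
	std::string name(part_spec ? part_spec : "");
	std::string::size_type slash = name.find_last_of("/\\");
	if (slash != std::string::npos)
		name.erase(0, slash + 1);
	std::string::size_type ext = name.rfind(".prt");
	if (ext != std::string::npos)
		name.erase(ext);
 
	// Hypothetical site rule: drawings are prefixed "DWG-",
	// everything else becomes a design item.
	const char *item_type =
		(name.compare(0, 4, "DWG-") == 0) ? "DojoDrawingItem" : "DojoDesignItem";
 
	// Copy the answer into memory NX can take ownership of
	// (include room for the null terminator).
	*answer = static_cast<char *>(std::malloc(std::strlen(item_type) + 1));
	std::strcpy(*answer, item_type);
 
	// Tell NX to use the value we supplied.
	return UF_CLONE_use_supplied;
}

As the comments in the template above note, other response codes let NX fall back to another registered callback instead; see uf_clone.h for the full list.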

Registering the Part Type Convert Callback

Now you have to tell NX to use your custom callback function. It’s the same process as registering an auto-translate callback. For our example I’ll register the convert callback within the NX startup user exit, ufsta().

extern "C" DllExport 
void ufsta( char *param, int *returnCode, int rlen )
{
	UF_initialize();
 
 	// Tell NX to use our convert callback:
	UF_CLONE_register_cvt_callback(
 
		// Use it for setting item type
		UF_CLONE_part_type_convert, 
 
		// Our callback from above
		dojo_cvt_item_type, 
 
		// An identifying  name
		"Dojo's Set Item Type",
 
		// description
		"Infers item type from part number", 
 
		// These two arguments together tell NX to call 
		// our callback first,
		// before any other registered callbacks.
		NULL, 
		true);
 
	// You can register other callbacks here too, such as the 
	// clone-autotranslate callback we covered previously
	UF_UGMGR_set_clone_auto_trans(dojo_autotranslate);
 
	UF_terminate();
}

Build this into a shared library (DLL) and place it where NX can find it when it launches. Now the import process will use your convert callback whenever an item type has not been manually set.

Testing The Callback

Our convert callback is a simple C function. By declaring it as extern "C" DllExport we make it available for calling from an external program. I like writing unit-testing programs in Python using Python’s ctypes library.

Other Uses for Convert Callbacks

The value UF_CLONE_part_type_convert we passed to UF_CLONE_register_cvt_callback() configured our callback for use when NX needed to know the item type. Just so you know, there are other types of convert callbacks you can implement:

  • UF_CLONE_user_name_convert: called when a part has USER_NAME naming
  • UF_CLONE_part_type_convert: called when a part needs a type for the PDM system
  • UF_CLONE_part_name_convert: called when a part needs a PDM name
  • UF_CLONE_part_desc_convert: called when a part needs a PDM description
  • UF_CLONE_part_own_user_convert: called when a part needs a PDM owner user
  • UF_CLONE_part_own_group_convert: called when a part needs a PDM owner group
  • UF_CLONE_part_checkout_convert: called when a part needs a checkout comment
  • UF_CLONE_assoc_file_dir_convert: called when a part needs an associated file directory

The post How to Automate NX Migrations with Part Type Convert appeared first on The PLM Dojo.

How to Remove Status with a Workflow Process


Sometimes you need to remove status, or unstatus, something in Teamcenter.

Typically removing status from objects would be an administrator task to repair some sort of mistake. But you could have a standard workflow where objects go through state changes from statused to unstatused and back again.

Here’s how to do it.

release_man utility

One option is the release_man command line utility. However this can only be run by a DBA user and doesn’t leave any records behind. You can’t tell who used it to remove status, and you can’t build it into a workflow.

An Unstatus Workflow

Another option is to create a workflow that removes the status. This gives you the ability to provide the functionality to non-DBA users, perhaps after having passed certain validations by rule handlers or by human review and approval. An unstatusing workflow also leaves a record in the database of who ran it, and if necessary who approved it, etc.

Remove Status with set-status DELETE

Creating an unstatusing workflow is easy. Just add the set-status action-handler, with the argument DELETE, to a task in the workflow. If you want to delete only certain statuses add -f=status_name to the argument list. If you want to delete all statuses then use DELETE by itself. The handler is normally attached to the complete action of a task, but you can attach it anywhere.

The DELETE option has been available for a while but it wasn’t always documented. It used to be mentioned only in an IR that you could look up in the GTAC database. It is documented in TC 8.3, however.

Was this helpful? Please let me know!

The post How to Remove Status with a Workflow Process appeared first on The PLM Dojo.

Enhancement Request: An External Dependencies BMIDE View


I know I should lay off the BMIDE, but I have another complaint… er, suggestion.

As you probably know a BMIDE data model has lots of dependencies and most of those dependencies exist within the data model itself. For example, a property may have a naming rule attached, and that naming rule may depend on a List of Values (LOV). The property, naming rule, and LOV all exist within the data model.

But data models also contain external dependencies. These are dependencies on things that aren’t in the data model. They are expected to be defined in the Teamcenter instance before the data model is deployed. And those are the problem.

External Dependency Examples

Groups can be an external dependency for Type Display rules which determine which groups can create which item types. The type display rules are in the data model but they refer to groups in the organization structure which are not. And if you try to deploy a template to a Teamcenter instance which doesn’t have those groups defined you will get an error.

PLMXML Transfer Modes can also be an external dependency. I’m not even all that sure what in the data model refers to transfer modes. But I know it’s an issue because the other day we spent a lot of time dealing with a template that failed to deploy because a transfer mode was missing. The template had been extracted from an upgraded Teamcenter Engineering instance which had a lot of accumulated junk in it, the dependency on this transfer mode being one such item. We had to hand-edit the XML of the data model to make it refer to a generic out-of-the-box transfer mode that we knew existed.

External Dependency BMIDE View

My suggestion is to add a new view to the BMIDE that shows all the external dependencies in one place. I think they should be organized by type — groups, transfer modes, etc. Each external dependency should also indicate where the dependency is defined, such as “DojoDesignItem – Type Display Rules”. If that listing can also be linked to the actual definition so we can click on it and go to where it is defined, that would be great.

Armed with this information we could check that the target systems already have the necessary groups, transfer modes, etc. defined or we could make sure we’ve added commands to the install scripts to create the necessary definitions.

Feasibility

I could be wrong but I don’t think there’s anything particularly challenging about this idea, other than just the time it takes to implement. I believe that the list of all possible dependencies is well defined. So all the BMIDE would have to do would be to scan for the types of XML elements which may have external dependencies and then collect any references that it finds.

Enhancement Request?

Is this an enhancement we should try to get Siemens PLM to implement? Do you have a better suggestion? If you just want to show your support, click the Google +1 button.

The post Enhancement Request: An External Dependencies BMIDE View appeared first on The PLM Dojo.

10 Teamcenter Preferences You Should Know


There are many Teamcenter preferences that you can customize. Knowing which preference to change can give you tremendous control over the behavior and appearance of Teamcenter. Listed below are some of the preferences which I find most useful to configure. For the full documentation on each preference, refer to the Preferences and Environment Variables Reference. You can download the PDF from GTAC.

So, here are some of my favorite preferences:

    Teamcenter Preferences for Identifiers and Naming Rules

    These preferences affect how IDs and names are assigned and used.

  1. ASSIGNED_ITEM_ID_MODIFIABLE

  2. ASSIGNED_ITEM_REV_MODIFIABLE

    These two preferences determine whether or not you can edit the item or revision ID provided by the Assign button.

  3. NR_BYPASS

    Creating naming rules to enforce proper naming conventions is a Very Good Idea. However, sometimes you may need to create an item that violates the naming rules — often when dealing with legacy data. Using this preference you can allow DBA users to bypass the naming rules. One warning: if you use naming rules to convert IDs to all upper or lower case, NR_BYPASS will bypass that behavior too. Any DBA user who is used to the system correcting his case might be surprised.

  4. ITEM_first_rev_id

    As you might guess, this preference changes which rev ID is assigned first.

  5. <Dataset Type>_saveas_pattern

    This preference defines the expected pattern that dataset names will follow. It doesn’t enforce the pattern; however, when you copy the dataset to a new revision the name will automatically update to incorporate the new item and revision IDs if the old dataset name matched the pattern.

    For example, if you have item revision 1000/A containing a dataset named 1000-revA and defined your saveas_pattern to be ${ItemID}-rev${RevID}, then upon saving to 2000/B the dataset name would automatically update to 2000-revB.

    Teamcenter Preferences for the User Interface

    These preferences affect the user interface. They control what is and what is not seen by the users.

  6. CR_allow_alternate_procedures

    I’ve talked about this preference before. It determines whether users can choose from all of the workflow templates when submitting objects to a process, or only see the list of templates defined for the specific object type they selected.

  7. com.teamcenter.rac.ui.perspectives.navigatorPerspective.IWantToSection

    Teamcenter 8 introduced an “I want to…” menu in the main navigator window that gives users hot links to whichever menu choices you want. For example, you can give them a link to launch the New Process dialog with one click instead of having to navigate to File → New… → Process. Users can configure their own set of links, which are saved by setting this preference. If you want to give all users the same I Want To… section, set it up for one user using the GUI dialogs. This will create a user-level preference for that user. Then create a new site-level …IWantToSection preference with the same values as your template user.

  8. QRYColumnsShownPref

    You’ve created custom queries specific to your business needs to make your users’ lives easier. But if they don’t see them, they’ll never use them. Configure this as a site preference listing the queries you want your users to use most often, and those queries will show up at the top of the list in the advanced query dialog.

    Teamcenter Preferences for NX Integration

    These preferences affect your NX sessions.

  9. TC_NX_SavedQueries

    If your users are primarily working in NX you can give them access to your favorite queries in NX’s Advanced Teamcenter Search dialog by adding the query names to the values of this preference.

  10. TC_part_types_display_filter

    This preference determines which item types are available from the NX File → New dialog.

    Special Bonus

  • Make Your Own Preference!

    If you’re writing your own Teamcenter customization you can have it check the value of any preference you want, including one you created yourself. It’s a handy way of configuring your customizations. One hint: give your custom preferences a common prefix, much like your template prefixes, to help you quickly find all of your preferences and to avoid collisions with any out-of-the-box preferences Siemens provides.

Which preferences do you find the most useful? Please share them with us in the comments below!

The post 10 Teamcenter Preferences You Should Know appeared first on The PLM Dojo.

Ninja Updates: Make Modifications Without Leaving a Trace


Sometimes you need to make changes to Teamcenter data without leaving any trace that you were there. A simple ITK program could do the update, but it will also change the Last Modified Date and Last Modifying User properties in the process. That is, unless you do the update in ninja mode and leave no trace behind.

When You Need to Be A Ninja

Sometimes there are valid reasons to make an update without updating the last modified date and user. For example, I recently wrote a short program to populate some new attributes with data from another system. Since the core data hasn’t actually changed — it’s only been synchronized with the external system — I didn’t want to change the last modifying user and date fields. Knowing who actually worked on something last, and when, is often useful information to have. It wouldn’t be very useful if all the data were suddenly updated to say that I had been the last person to work on it.

Not the Ninja Way

Now I know that those last-modified attributes are just fields that I can get and set with POM or AOM functions, so my original plan was to do something like:

# Pseudo-code -- not real function names!
 
# store current values
original_user = get_last_modifying_user() 
original_date = get_last_modified_date() 
 
# Updates last modifying user and date:
fill_in_new_attributes()
 
# Restore original values:
set_last_modifying_user(original_user)
set_last_modifying_date(original_date)

But then as I was perusing the documentation for the POM_ library I found a better way…

The Way of the Ninja

Here’s how to turn on Ninja mode:

POM_set_env_info(POM_bypass_attr_update, FALSE, 0, 0, 0, "")

POM_set_env_info can do a lot of things — so read the docs! — but the thing I was interested in was its ability to temporarily disable updating the last modifying user and last modified date when instances are saved. And by golly, it worked.
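For context, here is a rough sketch of how the call might sit inside a small ITK update program. The attribute name dojo_sync_id and the surrounding function are invented for illustration, the header paths may differ between Teamcenter releases, and the POM_set_env_info() arguments are copied verbatim from the call above; the AOM lock/set/save/unlock calls are standard ITK.

#include <stdio.h>
 
#include <tc/tc.h>             // header paths may vary between TC releases
#include <tccore/aom.h>
#include <tccore/aom_prop.h>
#include <pom/pom/pom.h>
 
// Rough sketch only. "dojo_sync_id" is a made-up custom attribute and
// error handling is trimmed to the bare minimum.
static int dojo_ninja_update(tag_t object, const char *new_value)
{
    // Turn on "ninja mode": saves made after this call should not touch
    // the last modifying user or last modified date.
    int rc = POM_set_env_info(POM_bypass_attr_update, FALSE, 0, 0, 0, "");
    if (rc != ITK_ok)
        return rc;
 
    rc = AOM_lock(object);
    if (rc == ITK_ok)
        rc = AOM_set_value_string(object, "dojo_sync_id", new_value);
    if (rc == ITK_ok)
        rc = AOM_save(object);  // saved without disturbing the audit fields
 
    AOM_unlock(object);
    return rc;
}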

Warning

I don’t think I need to spend any time describing how ninja mode could be horribly misused out of either ignorance or malice. And it can be particularly misused by someone with a DBA account who can turn on bypass mode. So be careful who gets those DBA accounts, ‘kay?

The post Ninja Updates: Make Modifications Without Leaving a Trace appeared first on The PLM Dojo.

3 Lessons Learned while Upgrading Teamcenter to 8.3


About a month ago at work we went through the process of upgrading Teamcenter Engineering 2007 to Teamcenter 8.3 (AKA Teamcenter Unified). Here are a few lessons learned from mistakes we made:

  1. Account for workflows that were started, but not completed, before the upgrade.

    We had to make some changes to our workflows to accommodate some of the changes we made while upgrading Teamcenter. We tested the new workflow templates extensively, but we never tested what happened to the workflows that were started in Teamcenter Engineering but would be completed in Teamcenter 8.3. Either we should have tested that case, or we should have had a policy that all in-process workflows had to be completed or terminated before the upgrade.

  2. Test while logged in as a non-DBA user (not just while not logged into the dba group).

    We did lots of testing — but most of it was done by users who were members of the dba group, although they weren’t typically logged into it while they did the testing. Well, unfortunately, Teamcenter sometimes treats users a bit differently if they have dba membership even when they’re not currently logged into that group. When we went live we discovered that many of our regular users were having problems that we had never seen during testing. What we should have done in our test system was to log in as one of the “regular” users to do the testing.

  3. Roles are objects that can have object-ACLs.

    This isn’t a generally applicable lesson like the previous two, but it was enough of a surprise to me that I think I should mention it. Somehow a couple of our oldest roles, originally created in iMan 7 or so, turned out to have had object ACLs attached to them at some point. This caused some really bizarre and surprising problems when we went live. This was actually the main set of errors that we missed because we had been testing as users who were members of the dba group.

Here’s hoping that these lessons may help you avoid some grief the next time you upgrade Teamcenter!

The post 3 Lessons Learned while Upgrading Teamcenter to 8.3 appeared first on The PLM Dojo.

How To Compare and Sort Revision IDs


One piece of code I find myself writing over and over is a function that compares two revision IDs to decide which one is the higher rev. The revision sequence we use is too complicated for a standard lexicographical comparison to work. I’ve found several poor ways of implementing it. I think I’ve finally found a decent solution. Perhaps it’s something that will help some of you.

When Do I need to Compare and Sort?

Here are a couple of examples of where I’ve had to do a comparison of revision IDs.

1. Checking that revisions are created in order

Our rules for CAD models allow revisions to be skipped — typically when multiple RNs are being incorporated at once. What we don’t allow is for users to go back and create a lower rev than the latest. So, A → C → E is fine, skipping revs B and D, but A → C → B is not okay. Once C has been created the users can’t go back and create rev B.

We prevent this with a custom pre-condition on item rev creation that takes the existing rev IDs, sorts them in order, and then verifies that the proposed rev ID for the newly created revision is greater than the last revision in the sorted list.

It’s true that there is an ITEM_ask_latest_rev() function, but that looks at the creation date to determine what the latest rev is, and I don’t entirely trust creation dates. Revs could have been created out of order before we implemented the pre-condition that checks for that, or during a poorly handled data migration.

2. During data migrations

The other common reason this comes up is data migrations. When migrating data into TC I want to check whether each component I’m importing is newer than what’s currently in Teamcenter. If it is, I want to migrate it; if it isn’t, I skip it and have the migrated assembly use the latest rev that’s already in TC.

What’s So Hard About that?

It’s difficult because of the revision sequence we’re using. If your sequence is simple then you may not have any problems.

The revision sequence we use is,

  1. Numeric revs (01–99) for preliminary work
  2. Revision “dash” — a literal “-” character, for the initial release.
  3. Single character Alpha revisions, A–Y, for approved changes (note that “Z” is an illegal character for revisions)
  4. Two digit Alpha revisions, AA–YY, for when we run out of single digit alpha revisions

If you did a simple text based comparison of the revision IDs it would almost work, but not quite. Rev “-” compares as less than both the numeric and the alpha revisions, and the one and two digit alpha revs don’t compare correctly:

revs = ["-", "91", "01", "AA", "B"]
sorted(revs) == ["01", "99", "-", "B", "AA"]
# Correct sorting

But what we get is,

sorted(revs) == ["-", "01", "99", "AA", "B"]
# Incorrect sorting
# The ASCII value for '-' is 45 while the ASCII value or '0' is 48
# so '-' comes before '01'.
# "AA" comes before "B", alphabetically.

On top of that, legacy alpha revisions were zero-padded to always have two characters, “0A” instead of “A”. That causes more problems. And then on top of that rev “-” used to be entered into Teamcenter (and nowhere else) as “00″ (please don’t ask why).

So the correct sorting would be,

revs = ["01", "99", "00", "-", "0A", "A", "0B", "B", "Y", "AA", "YY"]
sorted(revs) == ["01", "99", "00", "-", "0A", "A", "0B", "B", "Y", "AA", "YY"]
# Correct sorting

But instead we would get,

sorted(revs) == ['-', '00', '01', '0A', '0B', '99', 'A', 'AA', 'B', 'Y', 'YY']
# Naive and incorrect sorting.

A Revision Comparison and Sorting Recipe

The process I follow can be called normalize, decorate, sort:

1. Normalize

The first step is to get a normalized version of the rev IDs:

string normalize(const string &rev_id);
// normalize("0A") == "A"
// normalize("00") == "-"
// normalize("A") == "A" // unchanged by the normalization

This function converts rev IDs to a common format for the comparisons so we don’t have to insert all sorts of special handling code into our actual comparisons. I’ll leave it to you to figure out the implementation (it’s not terribly interesting — you’ll probably end up using isalpha() and isdigit() a lot).
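For what it’s worth, here is one possible sketch, built only from the cases described above (the legacy zero-padded alpha revs and the legacy “00” encoding of the dash rev); your own sequence rules may need more cases.

#include <cctype>
#include <string>
 
using std::string;
 
// One possible normalize() implementation, covering only the cases
// described in this post.
string normalize(const string &rev_id)
{
    if (rev_id == "00")
        return "-"; // legacy Teamcenter encoding of the dash rev
 
    // Legacy zero-padded alpha revs: "0A" -> "A", "0B" -> "B", ...
    if (rev_id.size() == 2 && rev_id[0] == '0' &&
        isalpha(static_cast<unsigned char>(rev_id[1])))
    {
        return string(1, rev_id[1]);
    }
 
    return rev_id; // numeric revs, "-", and normal alpha revs pass through
}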

2. Decorate each rev ID with its rev type

First, define an enum that lists the different types of rev IDs in order:

typedef enum
{
    NUMERIC,
    DASH,
    ALPHA, // one digit alpha, A, B, C, etc.
    DOUBLEALPHA // two digit alpha, AA, AB, etc.
} rev_type_t;

Next, define a function that looks at a rev ID and returns its type:

rev_type_t get_rev_type(const string &normalized_rev);
// get_rev_type("01") == NUMERIC
// get_rev_type("-") == DASH
// get_rev_type("A") == ALPHA
// get_rev_type("AA") == DOUBLEALPHA

Again, I’ll leave the implementation as an exercise.
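Again only a sketch, based on the sequence rules listed earlier, and it assumes it is handed an already-normalized rev ID:

#include <cctype>
#include <string>
 
using std::string;
 
// Classify an already-normalized rev ID using the enum defined above.
rev_type_t get_rev_type(const string &normalized_rev)
{
    if (normalized_rev == "-")
        return DASH;
 
    bool all_digits = !normalized_rev.empty();
    for (string::size_type i = 0; i < normalized_rev.size(); ++i)
    {
        if (!isdigit(static_cast<unsigned char>(normalized_rev[i])))
        {
            all_digits = false;
            break;
        }
    }
 
    if (all_digits)
        return NUMERIC;                 // "01" .. "99"
    if (normalized_rev.size() == 1)
        return ALPHA;                   // "A" .. "Y"
    return DOUBLEALPHA;                 // "AA" .. "YY"
}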

Finally, combine (decorate) each rev ID with its type:

#include <utility> // std::pair<>, std::make_pair()
 
std::pair<rev_type_t, string> decorate_revid(const string &rev_id)
{
    const string normalized_rev = normalize(rev_id);
    const rev_type_t rev_type = get_rev_type(normalized_rev);
 
    return std::make_pair(rev_type, normalized_rev);
}
// decorate_revid("01") == pair(NUMERIC, "01")
// decorate_revid("-")  == pair(DASH, "-")
// decorate_revid("00") == pair(DASH, "-")  // "00" normalized to "-"
// decorate_revid("A")  == pair(ALPHA, "A")
// decorate_revid("0B") == pair(ALPHA, "B") // "0B" normalized to "B"
// decorate_revid("AA") == pair(DOUBLEALPHA, "AA)

3. Compare and Sort

Now that you’ve decorated the rev IDs you can compare them reliably. Instead of comparing solely by the revision IDs themselves, or even their normalized forms, we’re now comparing objects of type pair<rev_type_t, string>. The first element of each pair, the rev_type_t, ensures that the classes of revision IDs compare correctly relative to each other: all numerics are less than all dashes, which are less than all single-digit alphas, which are less than all two-digit alphas. The normalized revision ID in the second element then ensures that, within each type, the rev IDs sort correctly.

// given two rev IDs return the greater.
// If they are equivalent, return the first argument
// (preserve weak ordering)
//     max_revid("0A", "A") == "0A"
//     max_revid("A", "0A") == "A"
string max_revid(const string &first_rev, const string &second_rev)
{
    if( decorate_revid(first_rev) >= decorate_revid(second_rev) )
    {
        return first_rev;
    }
    return second_rev;
}

And once you can compare them, you can sort them:

#import <algorithm> // std::sort()
#import <vector>
 
// A comparison function for use by std::sort(). 
// Returns true if the first rev should be sorted before the second, 
//   false otherwise
bool compare_revids(const string &first_rev, const string &second_rev)
{
    return( max_revid(first_rev, second_rev) == second_rev );
}
 
string unsorted_revids[] = {"0A", "99", "AA", "A", "-", "B", "00", "01"}
vector<string> revids(unsorted_revids, unsorted_revids + 8)
 
std::sort(revids.begin(), revids.end(), compare_revids);
// revids now sorted: ["01", "99", "-", "00", "0A", "A", "B", "AA"]

Usage

Now that I have max_revid() and compare_revids() I can implement my revision-ordering pre-condition easily (a sketch follows the steps below):

  1. Get all current revision IDs for the item
  2. Sort them in order using std::sort() with compare_revids() as the compare function
  3. Compare the last revision in the sorted list of rev IDs to the rev ID for the new revision using max_revid(). If the new rev ID isn’t greater than the current rev, return an error code.
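Here is roughly what that check looks like in code, reusing decorate_revid() and compare_revids() from above. Gathering the existing rev IDs from the item (via ITK) is assumed to have happened already.

#include <algorithm>
#include <string>
#include <vector>
 
// Sketch of the ordering check behind the pre-condition.
bool new_rev_is_in_order(const std::vector<std::string> &existing_revs,
                         const std::string &proposed_rev)
{
    if (existing_revs.empty())
        return true; // the first revision of an item is always allowed
 
    // Highest existing rev, using the same ordering std::sort() would use.
    const std::string latest =
        *std::max_element(existing_revs.begin(), existing_revs.end(),
                          compare_revids);
 
    // The proposed rev must be strictly greater than the latest existing rev.
    return decorate_revid(proposed_rev) > decorate_revid(latest);
}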

The post How To Compare and Sort Revision IDs appeared first on The PLM Dojo.


A Sneak Peek at Multifield Keys in Teamcenter 10


As I’ve mentioned before, multi-key identifiers in Teamcenter are a feature that many of us have been waiting for. It was originally rumored to be coming in TC 9, but that didn’t happen. Fortunately, it did make it into Teamcenter 10. Some customers have begun testing it in the pre-production version 10.0. I haven’t had a chance to work with it myself yet, but I’ve been studying the 10.0 documentation to learn what I can about it. The documentation refers to the new feature as multifield keys. Here is what I know.

What I know about them

The multi field key is defined by setting the new MultiFieldKey business object constant on a particular business object type. That key field definition is inherited by the children of that business object type.

Disclaimer
This is all based on my reading of the documentation. It’s entirely possible that once I get a chance to test it for myself I’ll discover that things don’t work quite the way they were advertised to work.

Multifield Key Definition

Multifield key definitions consist of a domain, which is the business object on which the multi field key is defined, and one or more properties of the business object. So if you defined a multifield key on Parts, Documents, and Designs your keys would be

  • Part{item_id}
  • Document{item_id}
  • Design{item_id}

And the effect would be that you could (finally) use the same item ID for a Part, a Document, and a Design. Which I think is pretty cool. It solves a lot of problems that in the past led a lot of us to add otherwise unnecessary prefixes or suffixes to our item IDs to differentiate the IDs of two different types of objects.

But it doesn’t stop there…

Inheritance of multifield keys in TC 10.0

Like any other kind of property, the MultiFieldKey business object constant is inherited by all children of the business object on which it was defined. So if you defined a key on the Part business object, and Part had three children, CompanyPart, VendorPart, and IndustryPart, all three children would inherit the same key, Part{item_id}, and so the item ID would have to be unique amongst all of the children of Part. If multifield keys had been defined as {item ID, item type}, with item type being the type of the current object, then CompanyPart, VendorPart, and IndustryPart could all have used the same identifier.

Works on more than just Items?

One thing I noticed: nowhere in the documentation I read did it specify that the MultiFieldKey business object constant could only be used on Items and their subtypes. It consistently refers to adding the property to BusinessObjects rather than Items. So I have to wonder if we can use multifield key support on things besides items. For example, can we use them on datasets?

My multifield key

The key fields I’m thinking of incorporating into our system are ItemType{item_id, cage_code}.

Have you tried multifield keys yet?

If you’ve actually worked with multi field keys, please let us know in the comments below. Does it work like the documentation says it should? Did you run into any unexpected problems? Did you notice any impact to performance? What properties did you include in your multifield keys?

The post A Sneak Peek at Multifield Keys in Teamcenter 10 appeared first on The PLM Dojo.

Please Join The PLM Dojo on LinkedIn


Hi there, everybody. I’d like to invite you all to join the LinkedIn group for the Dojo. I know a lot of you are on LinkedIn and visit my site because of something I’ve posted in one of the other Teamcenter or PLM groups over there. So I thought I’d try setting up a group there myself.

My hope is to create a group that’s like what I’ve been trying to do with the Dojo — a place for sharing our learning with the rest of the community. That’s a bit different from most of the groups on LinkedIn, which are mostly used as a technical support forum for people who don’t want to use GTAC for some reason. That’s fine, there’s definitely a place for that, but there’s no reason for me to create another group that’s basically identical to any of a half dozen other groups that are already out there.

What I’m hoping for here is a group where people come to share the knowledge, experience and wisdom they’ve gained. A Dojo, if you will. A PLM Dojo, even. If that sounds like something you’d like to participate in, come check it out.

Thanks,

Scott

P.S. Personally my favorite social network these days is Google+, but I haven’t found too many other people in this field on there yet. So if you’re on G+, circle me and let me know that you’re coming from the Dojo. Maybe we’ll set something up there too.

The post Please Join The PLM Dojo on LinkedIn appeared first on The PLM Dojo.

4 Steps to Follow When Updating Workflow Templates


Teamcenter’s Workflow Designer interface is horribly bad. Just one of the flaws is that workflow templates do not have proper revisions. I mean, just think of the irony here. Pretty much the primary task of Teamcenter is to allow you to track what changes have been done to a piece of data. And one of the primary tools in Teamcenter is the workflow template. But they didn’t give you the ability to easily keep track of changes to your workflow templates.

But that’s not all: they set it up so that any time you make a change it’s instantly saved to the template. It’s not like most other applications on your desktop where you can say, “Oops! What was I thinking? Time to close without saving!” Even other TC applications give you that ability; Structure Manager and Access Manager, to name two. So they make it really easy to make inadvertent, permanent changes to your workflow template and really hard to figure out what exactly has changed. Lovely.

Unless you’re capable of never making a mistake when working with workflow templates, you need a system that will keep you out of trouble. Here is mine.

My System for Workflow Template Sanity

Let’s say I want to make some changes to my Dojo-Release workflow process. Here is what I would do.
My original Workflow Template

1. Create a new workflow template based on the current template

The basis of my system is that I do not make changes to the current definition of my workflow template. Instead I create a new template, based on the one I want to change, and make my changes to the new template.

Create a new root template

I name my new template {old template name}.DEV.

Add .DEV to the name

2. Make my changes to .DEV and Test Them

Now that I have a duplicate Template that looks just like my original, I make my changes to the duplicate.

Edit the DEV template

3. Rename original template

Once I have my .DEV template doing what I want it to do it’s time to swap the .DEV and production versions. First I rename the production version to something like {template name}.old-{datestamp}, e.g. Dojo-release.old-20120324a.

Rename the original template
Add a datestamp suffix to the original template

Remember that you have to take the original template into Edit Mode in order to rename it.

4. Remove .DEV suffix

Finally I remove the suffix from my .DEV template, which is the updated workflow.

Remove the .DEV suffix

So now the .DEV template replaces my original template.

the .DEV template replaces the old production template

Various and Sundry

Of course all this happens in your test database before you make any changes to production. I mean, you would never ever ever make changes to your production workflows, no matter how trivial, without trying them in test first, right? Yeah, that’s what I thought.

It may not be a bad idea to create a new template based on your updated template as a snapshot of how it originally looked. It’s just a bit of extra protection just in case someone (not you, of course) ever goes and makes a change to the production version of the template without following this procedure.

I generally leave my old versions in the under construction state, just so they’re not available for someone to accidentally use.

I understand there is a way to go back and find your old versions of your templates. From what I saw it was a pain in the butt. And I can never remember how to do it. So I follow this procedure here.

Was this helpful? What are your tips for keeping track of your workflow template changes?

The post 4 Steps to Follow When Updating Workflow Templates appeared first on The PLM Dojo.

3 Master Model Concept Misconceptions


If you’re around NX for very long you’ll hear someone talk about the Master Model concept. It is an important concept. It comes up in the various forums from time to time, usually when people ask, “What is the master model concept?” There is no shortage of answers offered. But I feel that most of the discussions miss the mark. I think there are a lot of misconceptions about the Master Model Concept. I am going to try to clear some of them up.

Master Model Definition

If I’m going to discuss what people get wrong about the Master Model Concept, I should at least try to provide a definition of what it is. Here is how I define it:

Master Model Concept:

A method of separating a CAD model definition from data which is dependent upon the CAD model but does not define the CAD model itself. This separation is achieved by storing the CAD model definition, called a Master Model, within one file and each piece of dependent data within separate files which refer to the Master Model, typically by including it as a component within themselves. Typical examples of dependent data types are drawings, machining tool path definitions, and FEA analyses.

Now, on to the misconceptions.

Master Model Misconceptions

These are the misconceptions that I feel people have.

1. The Master Model Concept is Only for Drawings

Drawings are the first place most of us encounter the MMC. Unfortunately, many people think this is all the MMC is about. In the early days of Unigraphics there was but one file, and it contained your geometry and your drawing. Then UG gave us the ability to create assemblies. Soon someone figured out that you could put the geometry in one file and the drawing in another. And then the word spread that drawings did not have to be embedded in the same file as the model.

But that’s not the only thing you can use MMC for. You can have a separate file for your manufacturing tool path information too. And another file for your FEA analyses. Any type of work you need to do that is driven by the model but does not influence the model can (and should!) be broken out into a separate file that references the master model.

2. The Master Model Concept is about reducing file sizes

The second misconception I hear is that the main benefit of MMC is that it reduces your file sizes. Drawings and tool paths and FEA meshes can consume a lot of disk space, so by separating them out from the model file you make your models easier to work with.

That is a benefit. I agree with that.

I just don’t think it’s the primary benefit.

I think the primary benefit is that MMC decouples your model file from your drawings, machining files, and analysis models. Decoupling is a word that is used in software engineering fairly regularly; I haven’t heard it used much in terms of CAD modeling. Decoupling means taking a system and dividing it into logical, self-contained units which share only what they need to share with each other through well defined channels. For example, drawings do not need to know the model tree for the geometry. It makes no difference if that hole was created with a feature or by extruding a sketch. All the drawing needs to see are the faces and the edges. Likewise, the model does not need to know that the drawing even exists.

So here’s another point about MMC:

The flow of information in MMC is from the master model and to the dependent files.

So, why is this A Good Thing?

Protecting model integrity

No, I’m not talking about protecting the reputations of Victoria’s Secret models. I mean that there is nothing you can do to the drawing, tool paths, or FEA analyses that will damage the master model.

If you had everything in one file, could you be sure that you wouldn’t inadvertently make an unintended change to the model when you were only supposed to be updating the drawing?

Parallel work efforts

A second benefit of decoupling the model from the dependent files is that you can now divide the work effort into parallel tracks. One person can be working the model while another starts on the drawing and a third begins to prepare the tool paths.

Separate life cycles

Since the model and drawing and machining and analysis files are separate they can be submitted to different workflows for review and release. The engineering group can review the model, the drafting group can review the drawing, manufacturing, the tool paths, etc.

Independent changes

The different files can be revised independently. A change to a note on the drawing does not require the model to be changed. Nor does a new analyses. And changes to the model that do not affect the other files do not require them to be updated. For example, there’s no reason to update a tool path because someone added a new reference set to the model.

3. In Teamcenter the Dependent Models Must be Manifestations

The standard implementation of Teamcenter puts the master model in a UGMASTER dataset that is attached to a item revision with a Specification relationship. Then, additional datasets for the other models, like the drawing, are put in UGPART datasets that are attached to the same item revision but with a Manifestation relationship. Some people seem to think that this is what MMC has to look like in Teamcenter. But there are other solutions. For example, the drawing could be in its own item revision under a different item. For more on this topic, see this post on different ways of storing drawings in Teamcenter

Closing Thoughts

History?

If anyone knows the history of who first came up with the master model concept, please share. (Paging John Baker…)

Be Consistent

I endorse using the MMC for all files. I know that some people feel that for really simple models the drawing should be embedded in the model file. Personally, I feel it’s better to just consistently use one approach. If you’re going to use MMC in some cases (and you should), just use it in all cases.

Watch out for the WAVE

Interpart relationships, particularly WAVE relations, can turn the MMC upside down. Watch out for any relationship that makes a master model dependent on something that happens in the derivative file(s).

More?

Have I missed any misconceptions? Do you disagree with my choices? Leave a comment, below.

Was this useful?

If so, please share, and endorse (+1, Like, etc.) on your social network of choice.
If you feel I’m mistaken about or missing something, please share your thoughts in the comments.
Thanks!

The post 3 Master Model Concept Misconceptions appeared first on The PLM Dojo.

Where Was The Dojo Last Week?


I was offline for most of last week. So, just what was I doing?

Flamenco Beach, Culebra Island, PR
I think I may have found the future home of the PLM Dojo Educational Center and Retreat



Our accommodations
The PLM Dojo field office


The Tank
The Defenses may need a bit of an upgrade


Yo ho ho and a bottle of…
I was thinking deep, deep, PLM and Teamcenter thoughts the whole time I was there


My Arrival on Culebrita
Oh #$%@!!! I forgot to open the parachute!


Tortuga Beach, Culebrita Island
Man, somebody on this island must need some help with their Teamcenter install.



The post Where Was The Dojo Last Week? appeared first on The PLM Dojo.

How To Build a Compound Property to a Form Inside a Dataset


Before we begin I want to thank Don Knab of SiOMSystems for his assistance in verifying my work for this post. His help was invaluable. Additionally, his colleague at SiOM, Yogesh Fegade, has recently started his own Teamcenter blog, which I heartily recommend. Go check it out.

How to See Your NX CAD Model’s Mass Properties on Your Item Revision

Yesterday I got an email from somebody I took the Teamcenter 2007 Application Administration class with several years ago. He reminded me that I had shown him how to display the mass properties of an NX model on the item revision, using compound properties. I thought that it would make a good post to show how to do it for Teamcenter Unified. In trying to redo it in Teamcenter Unified I quickly discovered that I had forgotten most of what I knew and had to figure it out all over again. So I decided it would be even better to show you how I figured it out. So, here we go.

Objectives

Here’s what you should get out of this post.

  1. You’ll learn how to use compound properties to make mass properties of NX CAD models visible on the item revision.
  2. You’ll learn how forms store their properties. It isn’t like how most objects store their properties.
  3. You’ll see examples of how to interrogate the data model to figure out how things are put together.
  4. You’ll likely come away with a better understanding of why I prefer to not use forms for my data model customizations.

What are we trying to do?

A standard capability of MCAD applications like NX is the ability to calculate properties such as mass. After all, it’s usually useful to know how much something weighs, right? When we put the data into Teamcenter it would be nice to be able to see those properties in TC without having to open the model in NX. It turns out that Teamcenter already stores the mass properties. The problem is, they’re sort of hidden.

Where does Teamcenter store mass properties?

If you right-click on a UGMASTER dataset and choose Named References, you’ll see several references, including a UGPartMassPropsForm.

If you open the form, you’ll see all of the mass properties calculated for the model.

Calculated Mass Properties for CAD model

This only works if mass properties were actually generated for the CAD model at some point from within NX. Otherwise there won’t be a mass properties form.

We Need a Compound Property

I suppose that’s better than nothing, but I’d like to see the property listed as a property of the item revision itself. If you recall my previous post covering Teamcenter properties, you’ll know that compound properties let us take a property defined on one object and show that same value on another object, as if it was defined on that second object. The trick here is going to be figuring out the path of relations and object types that we need to traverse to get from an item revision down to a particular property on the mass properties form.

Building a Compound Property

Let’s review how we create compound properties. You start by adding a new compound property to an object type, in this case an Item Revision. You then define a series of segments which link the destination object type to the source object type. Each link is a pair of a relation or a reference type and the type of object to be found on the other end of that relation or reference type. Finally, when you get to the object type that actually has the property in question you add a final segment which references that source property.

Setup: Show the Real Property Names

Set TC_display_real_prop_names=1
Before we try to create the compound property, I suggest we configure Teamcenter to show the real property names for everything, instead of the display names. I find that real property names are less confusing for this type of thing. We can do this by setting the TC_display_real_prop_names preference to 1 and restarting the client.

Defining the Segments

Starting Point: Item Revision

When defining the compound property, we start with the object to which we’re adding the compound property, in this case Item Revision

  • Item Revision


Segment 1: Item Revision → UGMASTER

UGMASTER is attached to the Item Revision by an IMAN_specification relationship
For the first segment we have to figure out which relationship type connects UGMASTERs to Item Revisions. If you select an item revision containing a UGMASTER dataset and look at the Details tab you’ll see that the UGMASTER is attached with an IMAN_specification relationship.

So that gives us our first segment pair, an IMAN_specification relationship and a UGMASTER object type.

  • Item Revision.IMAN_specification
    • UGMASTER

I’m splitting the segment definition between two lines because that’s how it will appear when you add the segments to the property definition in the BMIDE.


Segment 2: UGMASTER → some sort of form

Properties of the UGMASTER
We know that the properties are stored in some sort of form that is a named reference of the UGMASTER. But what is a named reference anyhow? What relationship or property defines the list of named references? And exactly what type of form is it that stores the mass properties?

Let’s take a look at the properties of the UGMASTER dataset itself (right click, view properties). Halfway down you’ll see a property called ref_list. Hey, that sounds sort of like it may have something to do with named references, doesn’t it? And it appears to have three forms and a .prt file attached to it, just like we saw in the named references view earlier. In fact, if you double click on the forms you’ll find that the second one down is in fact the mass properties form.

So now we know that the first half of this segment is a reference called ref_list.

But what type of object is ref_list pointing to?

  • Item Revision.IMAN_specification
    • UGMASTER.ref_list
      • ???


Segment 2, continued: What is the form type?

Property view of the Mass Properties Form
Okay, so we know how to get to the mass properties form, but what type of form is it, exactly?

To answer that, instead of opening the form, view its properties (right click, view properties). Yes, viewing properties is different from opening the form.

When you do this you can see the type of form listed in the header. It’s a UGPartMassPropsForm form. So now we can complete this segment.

Actually, we already knew the form type from when we had looked at the named references of the dataset earlier, but it’s nice to have a second way of confirming our work.

  • Item Revision.IMAN_specification
    • UGMASTER.ref_list
      • UGPartMassPropsForm


We’re almost done, right? (not so fast, skippy)

Up until now things have been fairly straight forward. Now they get a bit trickier.

We have traced a route from the Item Revision, through the UGMASTER, and now we’ve landed on the UGPartMassPropsForm. Since mass is a property of this form we should be able to apply the final segment now and be done, right?

  • Item Revision.IMAN_specification
    • UGMASTER.ref_list
      • UGPartMassPropsForm.mass WRONG!


In preparing this article we did try that and it didn’t work. The result was zero even though the form did have a mass value. Frankly, I think this is worth reporting to GTAC as a PR.

Anyways, let’s take a look at the definition of the UGPartMassPropsForm object in the BMIDE.

Definition of UGPartMassPropsForm

Forms are Secondary Business Objects

Notice the Storage class is Form. Since the storage class has a different name from the business object, we know that this business object is a secondary business object. This means that this form has the same properties as the base class of Form. If we look at the Form business object we’ll see, unsurprisingly, that it does not have any mass properties defined. So where does the UGPartMassPropsForm business object get its mass properties?

Forms have two Storage Classes

To answer that question, look a bit further down. Notice the field called Form Storage Class. Its value is a storage class called UGPartMassProps (note that it does not end in …Form). This is actually the class where the properties are stored. For reasons long since forgotten, forms, as originally implemented in iMan, used a second storage class to store their custom properties. The primary storage class for forms, Form, just defines basic attributes common to every type of form — date created, owner, last modification date, etc. It’s this second storage class we need for our compound property.

(Okay, the rumor I’ve heard was that the second storage class was used for performance reasons on the ancient hardware on which iMan originally ran. The second storage classes were direct children of POM_object and so, supposedly, lookup was faster.)

UGPartMassProps Class

Segment 3: UGPartMassPropsForm → UGPartMassProps

Form Business Object
So how do we get from the form to the UGPartMassProps class?
Let’s take a look at the properties that the storage class Form defines. We’ll sort them by the inherited column since the relevant property is most likely defined at this level. To me, there are two that look interesting: form_file and data_file. However, form_file is a string[32] property, so that can’t be the reference we’re looking for. data_file, though, is a typed-reference property. So that just might be the right one.

Okay, I’ll cut to the chase. It is the right one. But feel free to try some other property if you think that’s a more likely choice.

So now we have our third segment.

  • Item Revision.IMAN_specification
    • UGMASTER.ref_list
      • UGPartMassPropsForm.data_file
        • UGPartMassProps


Final Segment: mass properties

Now we can finally finish off the compound property. We just have to select the mass property as the final segment.

  • Item Revision.IMAN_specification
    • UGMASTER.ref_list
      • UGPartMassPropsForm.data_file
        • UGPartMassProps.mass

Defining the Compound Property

Here’s what the actual compound property definition looks like in the BMIDE.

Compound Property in BMIDE

This compound property will show the mass value from the UGPartMassProps form as a property on the item revision itself. Pretty cool, eh? You can use the same principle to show any of the other mass properties, or any of the properties stored on any of the other forms which are attached as named references. That’s even cooler.
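Once deployed, the compound property behaves like any other item revision property. As a quick, hedged illustration, here is an ITK sketch that reads it; the property name dj4_mass is hypothetical (use whatever name and prefix you gave your compound property), and header paths may vary between Teamcenter releases.

#include <stdio.h>
 
#include <tc/tc.h>             // header paths may differ by release
#include <tccore/aom_prop.h>
 
// "dj4_mass" is a made-up compound property name; substitute your own.
static int dojo_report_mass(tag_t item_revision)
{
    double mass = 0.0;
    int rc = AOM_ask_value_double(item_revision, "dj4_mass", &mass);
 
    if (rc == ITK_ok)
        printf("Mass pulled up from the mass properties form: %g\n", mass);
 
    return rc;
}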

Recap

Here are some of the key things we have learned:

  • Named references are attached to Datasets by the ref_list property.
  • Forms have two associated storage classes.
  • The storage class that stores the custom form properties is linked to the form by a data_file property.
  • Opening a form is different from viewing the form’s properties: the former displays the custom properties the form stores about its parent object, while the latter displays information about the form object itself.
  • The details view can be used to show the relationship by which one workspace object is attached to another.
  • The preference TC_display_real_prop_names configures the display to show us real property names instead of display names.

Further investigation

  • I would love to hear if anyone does something similar with other types of CAD datasets besides NX.
  • You can actually pick Form instead of UGPartMassPropsForm and Dataset instead of UGMASTER and the compound property will (mostly) still work — I wonder if there is a difference in performance.
    • One exception that Don found was that if the Item Revision had both a UGMASTER and a UGPART dataset attached, both of which had a mass property, then the compound property failed to display any value.
  • I’m curious if there are any plans to do away with the secondary storage classes for forms. I suppose that could introduce big backward compatibility issues though.

If anyone has any insight to share on these or any other related questions, please share them in the comments below. Thank you!

The post How To Build a Compound Property to a Form Inside a Dataset appeared first on The PLM Dojo.

Bottom-Up Release is a Lie


Top-Down vs Bottom-Up Release

In my work in the PLM world (ha! See what I did there?) I often hear that release must be bottom-up. But from what I’ve seen that is rarely what is actually done. In truth, release is usually done top-down. We say that components need to be released before the sub-assemblies they go into, and the sub-assemblies need to be released before the assemblies they go into. But that isn’t necessary. It is often not even desirable.

Releasing Parts Top-Down

Let’s say I’m designing an Airplane. I know the airplane has two jet engines. I know the engines have a turbine. But if it’s early in the design cycle I probably don’t know how many turbine blades will be in the turbine. Regardless, Purchasing wants a bill of material released so they can start the process of ordering parts and material. I cannot hold up the release of the engine until the design is completely finished. For an initial release of the BOM it’s enough to say that the airplane has two engines but the details will be finalized later.

Releasing CAD Designs Top-Down

Later in the design cycle I need to develop an assembly drawing of the engine. For my drawing I need to show how the fan and the compressor and the turbines and the nozzle all go together. But I really don’t need every detail of the contents of each sub-assembly fully defined. For an assembly drawing I only need the exterior dimensions and interface. So my CAD design may still be a work in progress.

Releasing vs. Freezing

What may confuse people is this: what we don’t want is for the CAD design used for the drawing to be modifiable. We want a static representation of the turbine, even if it isn’t completely correct yet. Therefore we generally insist that the components go through some sort of process where they are made read-only before the assembly can be released. We might call it releasing the components, but it probably doesn’t involve a formal or full review process — if it did, that would be a significant delay in the schedule. Would you really want to hold up release of a top-level assembly drawing because some bracket four levels down wasn’t released yet?

We don’t really need bottom-up release. What we want is a bottom-up freezing of the CAD design.

What does this mean in our PLM system?

Multiple Release Statuses

First, it means that you need a way to distinguish between something that has been fully reviewed and approved and something that has been merely frozen so that it can no longer be modified. Different release statuses are the obvious way to do that.

Different revision sequences

Second, you need a revising system that can handle the different release statuses. If your business rules say that the first full release of each part is done at rev A, and you need to freeze a revision of a CAD model so it can be used in a higher assembly, you can’t be freezing rev A. So you need some other revision identifier; perhaps numeric revs (01, etc.) are used pre-release, or perhaps you use a baseline-style revision, –.01.

Single revision sequence?

It would be interesting to hear if anyone has simply said, revisions start at 001 and run to 999. The first released revision is the first revision with the Released status.

Freezing Part Structure

If you’re releasing a Part BOM it probably isn’t necessary to fully release or freeze the component parts. But you probably do want to freeze the structure. In other words, you may not need to freeze all the attributes of the jet engine, but you do want to freeze the fact that the plane uses two engines.

Separate Parts and Designs

I think it means that it is helpful to distinguish between Parts, which are what is bought and sold, and Designs, which are the representations of those parts, generally as a CAD model. If you have a single record for both then you’re tying the release lifecycle of your Part data to your CAD data. The initial release of the part structure may not contain any CAD data at all. It may be just a BOM. Then again, maybe you could have a single record for both but only add CAD models at a later revision.

On the other hand…

If your assemblies are small and your design times short, you might feel that this is all way too complicated for your needs. You can probably do just fine with a single item type that is both the Part and Design and insist on full bottom-up release. You’ll probably have the occasional bottleneck in your process, but the extra overhead may simply not be worth it the rest of the time.

Closing

That’s how I see it anyhow. I’m always a bit leery of making generalizations about how things like this do or should work. I know there’s lots of you out there who come from lots of different industries that are completely different from the industries I’ve worked in. I’m really curious if what I’ve said makes sense for you or not.

Please leave your thoughts on this in the comments. Thank you!

The post Bottom-Up Release is a Lie appeared first on The PLM Dojo.


The Polls are now Open!


Note: New polls added, see below

Hey there, everyone. Just a quick note to point out the polls that are running in the sidebar and also on the new page, Polls, polls, polls. Please take a moment to answer a few.

I’m playing around with a new polling plugin for WordPress (the framework that the Dojo runs on). We’ll see how it works out. Other than being PLM or Teamcenter related, there’s not much connection between the questions. I think it will be interesting to get a sense of how you all are actually using Teamcenter.

I’ll also include the current polls after the jump.

take care,

Scott


The post The Polls are now Open! appeared first on The PLM Dojo.

A Trick for Updating a Precise Assembly Quickly


Introduction

Assembly and Component items
Assembly and Component Both at Rev B

There are times when a precise assembly structure is incorrect; the revisions configured aren’t the ones you’re actually using when you’re working on the assembly. This often becomes an issue when it’s time to release the assembly. You want the BVR (BOM View Revision) to accurately reflect the revisions of the components that were actually used when working on the current release. Maybe the assembly was already released with the incorrect BOM and you, as an administrator, need to fix it. Or you might try to fix it in Teamcenter’s Structure Manager by removing the wrong revisions and replacing them with the right ones. But that can take a lot of time and it would be easy to make a mistake. So, what to do?

Precise Rev Rule configuring wrong revision
Precise Rev Rule Configures Rev A of the Component

Here’s a trick that you can try that will let you quickly fix the assembly structure without opening the CAD tool at all or doing a lot of manual work on the assembly structure.

Setup

Here’s what you need in order to use this trick:

  1. A precise-only revision rule. How else are you going to verify that the precise structure is what you want it to be?

  2. A revision rule that configures the assembly structure to show the revisions you want the precise structure to include.
    “Latest Working” Rev Rule Shows Correct Revisions

And that’s it. On to the instructions. (By the way, if you’re confused about how a precise assembly structure can be configured to show revisions other than what is in its precise definition, take a moment to read this illustrated guide to understanding how revision rules work and this post about the differences between precise and imprecise assemblies.)

How to update your precise assembly structure

Here’s what you do:

  1. Send your assembly to Structure Manager
  2. Set the current revision rule to one that configures the correct revisions you want to see in your precise assembly structure
  3. Toggle the structure type to Imprecise
    Toggling between precise and imprecise
    Toggle Precise/Imprecise

     

    Imprecise Structure in Structure Manager
    Structure is now Imprecise
  4. Okay, pay attention now. Listen carefully because this step here is the key to the whole thing. So don’t screw it up. Okay, ready? Here it is: Toggle the structure type back to Precise. Got that? I didn’t go too fast for you, did I?
    After toggling back to precise
    Assembly Toggled Back to Precise

    Let’s recap:

  • Toggle from precise to imprecise.
  • Toggle from imprecise back to precise.
  5. Okay, if you made it this far, you’re almost done. Don’t mess it up now! Okay, so now, save the assembly structure. You don’t want to lose all that hard work, do you?
  6. Last but not least, set the current revision rule to your Precise-only revision rule to verify that the precise structure has been correctly updated.
    Precise structure now updated
    Precise Structure Updated

    That’s it. You’re done!

    Explanation

    So, what was that all about? Toggling it once… then toggling it back?

    Precise → Imprecise

    Okay, so you had your precise assembly, but you configured it with a revision rule that configured different revisions. Then you changed the assembly to an imprecise assembly. This means that the BVR doesn’t store specific revisions anymore, only the items. It depends entirely on the revision rules to select which revision to configure. It continues to configure the revisions according to the revision rule you chose earlier.

    Imprecise → Precise

    Then you toggle it back to precise. A precise assembly does store specific revisions. But which revisions should it store? It has no memory of which revisions it was storing before you toggled it to imprecise. So the logical choice is to store the revisions currently being configured by the active revision rule. In essence, you’ve reset the configuration to match the currently selected rev rule. Pretty cool, eh? If it helps to see that logic spelled out, there’s a sketch below.
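
    Here is a conceptual sketch of the decision in plain C++. To be clear, none of these types or functions exist in Teamcenter; they are made up purely to illustrate how a precise occurrence differs from an imprecise one.

        #include <string>

        // Illustrative only -- hypothetical types, not any Teamcenter API.
        struct Occurrence {
            bool        precise;     // does the BVR store a specific revision?
            std::string item_id;     // the item is always stored
            std::string stored_rev;  // only meaningful when precise == true
        };

        // Hypothetical stand-in for evaluating the active revision rule;
        // a real implementation would walk the rule's entries.
        std::string apply_revision_rule(const std::string& /*item_id*/)
        {
            return "latest-working-rev"; // placeholder result
        }

        std::string configured_revision(const Occurrence& occ, bool precise_rev_rule)
        {
            if (occ.precise && precise_rev_rule)
                return occ.stored_rev;               // a precise rule trusts the BVR
            return apply_revision_rule(occ.item_id); // otherwise the rev rule decides
        }

    Toggling to imprecise throws stored_rev away; toggling back to precise fills it in again from whatever the active revision rule currently configures, which is exactly why the trick works.

    Let me know if this is helpful or not in the comments below. Thanks!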

    The post A Trick for Updating a Precise Assembly Quickly appeared first on The PLM Dojo.

    Killing Memory Errors with the STL


    Pull up a chair and let me tell you a story from my early days as a professional programmer. It’s about how I screwed up, and what I’ve done since then to make sure that mistake is never repeated.

    I’m going to ramble for a bit but I promise that I’ll get to the point eventually.

    One of my first big tasks as a programmer was to update, in preparation for an upgrade, some iMan and Unigraphics code I inherited. For you younger kids out there, iMan was the predecessor of Teamcenter Engineering and Unigraphics later bought IDEAS and became NX.

    The code was a mess. Of course, every programmer always thinks that code done by someone else is a mess. But this really was. There were single functions that would have taken two dozen sheets of paper to print out — double sided. My favorite was a family of functions that, instead of returning values or modifying a parameter via a pointer, updated fields in a global array. One function would update element 0, another would update element 1, etc. And then other functions would know which element to read. But that’s beside the point.

    This code was prone to unrepeatable memory crashes. Memory errors are like that. One function would allocate memory, typically for a string or arrays, and then pass the pointers back to their callers. The callers would be responsible for freeing the memory — unless they passed the allocated pointers to their callers who would then have the responsibility, and so on.

    If you’re familiar with this type of code you know that it is error prone. Freed memory is read, allocated memory is allocated again, etc. These types of bugs can be hard to track down. Sometimes there’s a problem in an execution path that’s rarely taken. Or sometimes the pointer will still point to valid data so long as the OS hasn’t seen fit to reuse that space already. Nine times out of ten the code will seem to work fine, and then on the tenth try the OS will actually use that address for something else and the program crashes.

    Whack a Mole

    I didn’t care much for this type of code, but it would have taken a massive overhaul to make any substantial change. Being new on the job and new to this code base I was reluctant to make too many changes to it. So we tested the code until we found a problem and then I’d hunt it down and try to fix it, and then we’d repeat the process.

    Over and over and over and over. Think, whack-a-mole.

    Eventually we couldn’t produce any more errors so we decided it was finally ready to release.

    So we went ahead with the upgrade. And then… (cue dramatic music) …nothing much happened. The upgrade went about as well as upgrades ever do. There were some snafus here and there, but no show stoppers.

    So a week later I left for my first PLM World users group conference.

    And then all hell broke loose.

    Hitting Bottom

    On the first day of the conference we start getting calls from the home office. The drafting department is at a standstill. No drawings can be released. Programs are screaming about delays and imminent deadlines. Oh, and how’s the conference?
    Not my best week to be sure.

    The Recovery Process

    When I got back to the office I knew I had to do something dramatic. I had tried the whack-a-mole approach for too long. I might have been able to fix the cause of the current complaints, but I had no confidence that another bug wouldn’t be uncovered in another week. I couldn’t continue like that.

    I resolved that I would eliminate all of the memory errors from the code once and for all. Doing so required the massive overhaul I had been afraid to undertake earlier. Now I was more afraid of releasing another buggy version of the code.

    I set two goals for my code:

    1. The code had to work correctly.
    2. See goal number one.

    Admitting I Have a Problem

    Fixing this mess was a bit like starting a Twelve Step Program.
    The first step was to admit I had a problem:

    I am not smart enough to manage memory myself.

    The first step was to be modest about my programming abilities. Given my recent failure, this wasn’t difficult.

    Many times we programmers want to show how incredible our skills are. So we do wild and crazy things in our code that might eke out some extra bit of performance or optimize memory just a bit more or… or… something. More power to you if you can pull that off, but I’m not that good of a programmer.

    From admitting I wasn’t very good at dealing with memory management came the obvious solution to my problem: I should not manage any memory.

    Okay, fine, but short of hiring an assistant to write my code for me, how do I do that?

    Turning to the Standard Template Library

    After admitting you have a problem, the second big step in twelve-step programs is to turn to a higher power for assistance. For a C programmer who needs to manage memory correctly, that higher power is C++ and the Standard Template Library (STL).

    Now the STL is vast, but there were really only two things I needed to use from it, the std::string class and the std::vector class template.

    char* → std::string

    The first major area of my code where I was trying to allocate memory was string manipulation.

    void work_with_c_string(const char* input_c_string)
    {
    	char *c_str = NULL;
    	size_t len = strlen(input_c_string);
    	c_str = (char *)malloc(len + 1);
    	strcpy(c_str, input_c_string);
     
    	// do whatever…
     
    	free(c_str);
    	return;
    }

    Now, simple examples look simple, but real code gets messy in a hurry, especially when the malloc() and free() are in different functions. The std::string class deals with all of that internally though. It will allocate enough memory to store its contents, and free that memory when the string finally goes out of scope.

    void work_with_cpp_string(const string &input_cpp_string)
    {
    	// memory allocated and copy made automatically
    	string cpp_string(input_cpp_string); 
     
    	// adding a suffix -- memory resized automatically
    	cpp_string += ".foobar";
     
    	// whatever…
     
    	return; // memory for cpp_string automatically released
    }

    For the record, notice that I did more in three lines of C++ code than I had done in five lines of C code. That adds up.

    Passing std::string to C Functions

    But wait, the ITK libraries expect plain old char* strings as input, right? Fortunately for us, std::string has a member function, c_str(), that returns a const char* representation of the string.

    	const string value("my new value");
    	AOM_set_value_string(object_tag, "my_property", value.c_str() );

    Accepting allocated char* from the API

    We can’t entirely escape managing memory for C strings. The ITK API has many functions which return a char* which you are then expected to free. My approach to avoiding problems with those strings was bluntly simple.

    1. Initialize a new std::string from the char* string.
    2. Immediately free the char* string.
    3. Do all work with the copy.

    Is this overkill sometimes? Probably, but I’ve found that by not trying to be clever and figure out when it was necessary and when it wasn’t, I saved myself a lot of trouble later on.

    	char* temp = NULL;
    	AOM_ask_value_string(object_tag, "my_property", &temp);
     
    	// copy char* to std::string
    	const string value( temp );
     
    	// free char* string
    	MEM_free( temp );
     
    	// work with std::string…
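
    If you use that pattern a lot, it’s easy to fold into a little helper. This is just a sketch of the idea; ask_string_prop is my own name, not an ITK function, and the header locations are from memory, so adjust as needed. It keeps the copy-then-free discipline in one place.

    	#include <string>
    	#include <tc/tc.h>             // ITK_ok, tag_t
    	#include <tccore/aom_prop.h>   // AOM_ask_value_string
    	#include <base_utils/mem.h>    // MEM_free
     
    	// Sketch of a helper (not an ITK function): copy the ITK-allocated
    	// string into a std::string and free the ITK buffer immediately.
    	static std::string ask_string_prop(tag_t object_tag, const char* prop_name)
    	{
    		char* temp = NULL;
    		std::string value;
    		if (AOM_ask_value_string(object_tag, prop_name, &temp) == ITK_ok
    		    && temp != NULL)
    		{
    			value = temp;    // copy into the std::string
    			MEM_free(temp);  // free the ITK allocation right away
    		}
    		return value;
    	}

    With that in place a call site collapses to a single line:

    	string value = ask_string_prop(object_tag, "my_property");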

    dynamic arrays → std::vector<>

    The second big area that accounted for most of my memory management needs was building and using dynamic arrays to store lists of items. Again, the STL has an alternative that will deal with the memory management for you, the std::vector<> class template.

    Like string, vector will automatically allocate enough memory to store its contents, reallocate as new contents are added, and free its memory when it goes out of scope.

    	vector<string> str_vec; // a vector of strings
    	str_vec.push_back("first");
    	str_vec.push_back("second");
    	str_vec.push_back("third");

    Passing vectors to C-functions taking arrays

    Also, like string, vectors can be passed to C functions that expect regular arrays. The format may look a little odd at first. There isn’t a member function, like c_str(), to call. Instead you use the fact that the contents of a vector are guaranteed to be in contiguous memory, just as if they were in an array. The value of a standard array variable is the address of the memory block holding the array’s first element. So to pass a vector as an array you need to pass the memory address of the vector’s first element: &my_vector[0], where my_vector[0] gives you the first element, and then & gives you its address.

    Typically functions that take arrays also need to know how long the array is and take that as a separate parameter. You can pass that value by using the size() member function.

    See the example below.

    Accepting arrays from the API

    As with strings, when the API returns an array that I am expected to free I immediately copy it into a vector, free the memory, and then work with the vector.

    Example using vector

    	int *int_array = NULL;
    	int array_size = 0;
     
    	AOM_ask_value_ints(my_object_tag, "my_property", 
                               &array_size, &int_array);
     
    	// initialize vector with contents of array
    	// using the iterator (pointer) constructor and the real count
    	vector<int> int_vector(int_array, int_array + array_size);
     
    	// Free array
    	MEM_free(int_array);
     
    	// add a value to the vector
    	int_vector.push_back(999);
     
    	// pass to c-function API
    	AOM_set_value_ints(another_object_tag, "my_property", 
                               (int)int_vector.size(), &int_vector[0]
                              );
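
    The same idea works for arrays. Again, ask_int_props is just my own wrapper (with the same includes as the string helper above, plus <vector>), not part of the ITK API.

    	#include <vector>
     
    	// Sketch of a companion helper for integer array properties.
    	static std::vector<int> ask_int_props(tag_t object_tag, const char* prop_name)
    	{
    		int  count  = 0;
    		int* values = NULL;
    		std::vector<int> result;
    		if (AOM_ask_value_ints(object_tag, prop_name, &count, &values) == ITK_ok
    		    && values != NULL)
    		{
    			result.assign(values, values + count); // copy using the real count
    			MEM_free(values);                      // free the ITK allocation
    		}
    		return result;
    	}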

    A Note on Performance

    Earlier I said that I decided that above all other things my code had to work correctly.

    The corollary to that was that I did not have the following goals:

    1. To write the fastest possible code.
    2. To write the most memory-efficient code.

    The reason I bring this up is that some people might complain that the STL isn’t as efficient as pure C. My response is, yeah. So what?

    The typical sorts of things my ITK code does are to check some preconditions when creating an item or perform some action during a workflow task. During a typical day they might be executed a dozen times or less by most users. If the perfect implementation takes half a second to run and my implementation takes three, I may be six times slower, but honestly, will the user notice much? No, they really won’t. But if my code blows up and kills their session, hoo-boy, I will definitely hear about it.

    Back to My Story

    I decided to change the code to use strings and vectors instead of char* and arrays. I’d change one function at a time, attempt to recompile, and see what other functions couldn’t compile. Then I’d track those functions down and change them, and so on. This went on for a week of late nights, with managers stopping by daily to check on my progress. Can’t we make a quick fix? Frankly, I largely blew them off. I was convinced that the only true solution was the radical overhaul. If they had had anyone else they thought could have fixed the code I’d probably have been out of a job, or at least reassigned elsewhere. Thankfully, they didn’t.

    After a week of this I finally had a new version of the code to test.

    It was put into production.

    The memory errors were gone.

    It has been years now. I’ve found plenty of other things to get wrong in the code, but not memory errors. The memory errors are gone.

    Coda

    A tool designer I once worked with shared a quote that has always stuck with me. I’ll paraphrase his paraphrasing:

    It is one thing to create something that can’t obviously fail.
    It is another to create something that obviously can’t fail.

    – Anonymous

    (If anyone can tell me the original quote and author I’d be very grateful. I have consulted the oracle at Google with no luck.)

    Code that passes allocated memory around between functions and hands off responsibility for freeing the memory will, at best, be code that can’t obviously fail. It will never be code that obviously can’t fail.

    Using the STL puts me closer to writing code that obviously can’t fail.

    Resources

    There are two books that I have found invaluable for learning to write better C++ code.

    The first is Scott Meyers’ Effective C++, and the second is C++ Coding Standards by Sutter and Alexandrescu (disclosure: affiliate links). I highly recommend both books if you’re going to be doing any C++ programming. One small disclaimer: I actually have an older edition of Meyers’ book.

    The post Killing Memory Errors with the STL appeared first on The PLM Dojo.

    Git, and why We Need Distributed PLM


    git logo

    Like any good software developer, I use a source control system daily. But I’ve fallen behind the times. The latest source control paradigm out there is something called a Distributed Version Control System (DVCS). The two main DVCSs are Git and Mercurial. GitHub, which hosts git projects, seems to be getting written up weekly in technology and business publications. I’m playing catch up, but now I understand what the big deal is. The PLM world needs to take notice. We need Distributed PLM systems.

    Here’s why.

    What is Distributed Version Control?

    Before we talk about PLM software, we need to talk about software source control. Just hang in there if you’re not a developer yourself. I’ll get back to PLM in a bit.

    I Was Blind…

    I had heard of Git a few years ago, but the point of it had eluded me. Something about everyone having a full copy of the repository. Say wha…? Whatever. I’ll just stick with Subversion, thank you. After all, that’s a modern system. It’s so much better than CVS, or so I hear.

    I started to realize that I had missed the boat when I attended the 2012 Global Day of Code Retreat (which, by the way, was an awesome event — I highly recommend it). Sitting elbow to elbow with some really sharp professional programmers, I kept hearing “Git this…” and “Git that…”. In fact, the organizers of the event had recommended that everyone have a Git repository on their laptops for the retreat.

    The next clue was when I installed Aptana Studio on my laptop to work on a little python project of mine. Guess what, it came preconfigured to work with a Git repository. So I set one up for myself to work with, got a free GitHub account, and used Git for the first time.

    But I still didn’t get what the big deal was.

    …But Now I See

    Recently I was checking out Joel Spolsky’s blog, Joel On Software. If you’re a software developer, you need to be following Joel’s work. Even if you haven’t heard of him, you’re probably familiar with some of his work. Among (many) other things, he’s one of the cofounders of Stack Overflow. And if you’re someone who employs software developers, you really need to be reading Joel. In particular, go read what he has to say about desk chairs and private offices. Please. Do it now, I’ll wait. I have to go refill my coffee anyhow.

    Okay, everyone back now? Cool. Let’s get back on track.

    So Joel has a recent post about how he’s come to realize that distributed version control is superior to centralized version control.

    In order to explain to the rest of us why he’s become a DVCS convert, Joel put together a tutorial on Mercurial with a special Re-Education section for those of us familiar with Subversion.

    Joel on Subversion

    Here’s what Joel has to say about Subversion:

    Now, here’s how Subversion works:

    • When you check new code in, everybody else gets it.

    Since all new code that you write has bugs, you have a choice.

    • You can check in buggy code and drive everyone else crazy, or
    • You can avoid checking it in until it’s fully debugged.

    Subversion always gives you this horrible dilemma. Either the repository is full of bugs because it includes new code that was just written, or new code that was just written is not in the repository.

    Subversion team members often go days or weeks without checking anything in… All this fear about checkins means people write code for weeks and weeks without the benefit of version control

    Why have version control if you can’t use it?

    Joel Spolsky, Subversion Re-education

    Centralized Version Control, Illustrated

    Here’s how Joel illustrates life with a centralized Subversion Repository:
    Subversion Repository

    Everyone has a local working copy of the code base which they periodically synchronize with the master version on the server. Or not.

    Joel on Mercurial

    Now we start to get to what I had missed regarding Distributed Version Control Systems. True, every user has a local repository. But there’s still a central repository. Users check work into their local repository while they’re developing, and then merge their changes into the central repository.

    Distributed Version Control, Illustrated

    It looks like this:


    I’ll let Joel explain what this means.

    So you can commit your code to your private repository, and get all the benefit of version control, whenever you like. Every time you reach a logical point where your code is a little bit better, you can commit it.

    Once it’s solid, and you’re willing to let other people use your new code, you push your changes from your repository to a central repository that everyone else pulls from, and they finally see your code. When it’s ready.

    Mercurial separates the act of committing new code from the act of inflicting it on everybody else.

    And that means that you can commit (hg com) without anyone else getting your changes. When you’ve got a bunch of changes that you like that are stable and all is well, you push them (hg push) to the main repository.

    ibid

    The Problem with Centralized PLM

    That our current PLM systems follow the centralized data model shouldn’t be a surprise or controversial. That’s just how it is. The question is, why is that a problem? After all software development is completely different from designing Airplanes and Automobiles, right?

    No.

    Our PLM users are facing the same d*** problems that software developers face.

    Worse than that, not only are current PLM systems not as good as a Git or a Mercurial, they’re not even as good as Subversion.

    So what’s wrong with centralized PLM? Recall the primary problem with Subversion that Joel highlighted: “When you check new code in, everybody else gets it.”

    Since most of us use PLM to manage CAD data, let’s look at how that plays out for CAD.

    Option A: Check in bad designs

    I hope that it’s uncontroversial that designs aren’t perfect before they’re finished — if then! If you’re an NX user in a Teamcenter environment, every time you save your work you’re checking in a new change to the central repository. Congratulations, you’ve just polluted the system with your junk (sheesh, that sounds dirty). Oh sure, we have statuses and workflows and revision rules to make sure that other users don’t see your junk unless they want to (that doesn’t sound any better) but that stuff is hard to understand. Just last week someone made the comment that, “I’ve run into very few engineering organizations that understand precise/imprecise and Revision Rules.” In fact, my post on understanding revision rules is one of the most popular posts on this site.

    Option B: Avoid check in

    The other option is to avoid checking in your work until you’re sure it’s ready. While this isn’t an option for NX, most of Teamcenter’s CAD integrations allow this behavior. Typically, CAD integrations copy files from Teamcenter down to a local working directory from which the CAD application works with the files. The central Teamcenter repository is not updated until the user manually checks in their work… which could be days, if not weeks, later.

    So, what exactly is the benefit to the user of using a PLM system?

    The Promise of Distributed PLM

    Do you see now that we have the same problems with PLM software that Joel was describing with centralized source control systems? So let’s imagine that we’re living in a future world where we have a distributed PLM system. And robot butlers and flying cars. Not that they’re relevant, but they would be so damn cool.

    I am not talking about Classic or Global Multisite here. In order to get close to what I mean by Distributed PLM, every single user would have to have at least one personal instance of TC that was multi-sited back to the central site. That may be theoretically possible, but it would be a very heavyweight, and cumbersome, implementation. I suspect that a more usable implementation would maintain only the delta between what a user had checked into his or her own private repo and the central repository.


    So imagine that you’re a CAD user and, in addition to the central repository that you’re used to, you have a private repository. Now when you save your NX model or check in your ProE model you’re checking into your own personal repository. The main repository knows nothing of your work until you push your changes to it. We’re not putting unfinished work out where other users can find it, but we still have the benefits of version control.

    Let’s noodle what that means. For starters, revision rules become a lot less important.

    #ifdef vs. Revision Rules

    While running down the shortcomings of Subversion, Joel brought up the topic of branching and merging (which I’ll get to shortly myself) and how it doesn’t work very well in Subversion.

    [A]lmost every Subversion team told me…they swore off branches. And now what they do is this: each new feature is in a big #ifdef block. So they can work in one single trunk, while customers never see the new code until it’s debugged, and frankly, that’s ridiculous.

    Keeping stable and dev code separate is precisely what source code control is supposed to let you do.

    ibid

    Good lord, what an ugly way to write code.

    #if TC_VERSION < 8
    int foobar(tag_t rev)
    {
     	// implementation for TcEngineering
    	...
    }
    #elif TC_VERSION < 9.0
    int foobar(tag_t rev)
    {
    	// implementation for TC 8.x
    	...
    }
    #elif TC_VERSION < 10.0
    int foobar(tag_t rev)
    {
    	// implementation for TC 9.x
    	...
    }
    #else
    int foobar(tag_t rev)
    {
    	// implementation for TC 10+
        ...
    }
    #endif

    Egads. Thank God we don’t have to deal with that mess in Teamcenter, right?

    Wrong.

    We do the same exact thing. We just use revision rules instead of #ifdef.

    Don’t believe me? Pretend that foobar was an item instead of a function.

    • Foobar
      • Foobar/01 (Frozen)
      • Foobar/02 (Frozen)
      • Foobar/-.01 (Manufacturing Preview)
      • Foobar/A (Released)
      • Foobar/B (Released)
      • Foobar/C (Unstatused, owner=Scott)
      • Foobar/D (Unstatused, owner=Joel)

    Tell me that this isn’t basically how we select which revision to load in an assembly.

    #if RevisionRule == "Precise"
    LOAD(foobar/01)
     
    #elif RevisionRule == "Latest Frozen"
    LOAD(foobar/02)
     
    #elif RevisionRule == "Latest Manufacturing Preview"
    LOAD(foobar/-.01)
     
    #elif Revision Rule == "Latest Released"
    LOAD(foobar/B)
     
    #elif RevisionRule == "Latest Working, current user is owner"
    LOAD(foobar/C)
     
    #elif RevisionRule == "Latest Working"
    LOAD(foobar/D)
     
    #endif

    Holy crap, we have done the same thing that the Subversion users ended up doing. We’ve put everything into the “trunk” of the central repository and then we have a bunch of complicated rules which none of the users really understand in order to figure out which version of the model we should be seeing at any given time.

    And this brings me to the other point I wanted to make about what’s missing from PLM. The Subversion users ended up with a crappy #ifdef code implementation because branching and merging in Subversion doesn’t work very well.

    We ended up with a complicated set of release statuses and revision rules because we never had the opportunity to branch our designs. Teamcenter just doesn’t support it. I hear that Windchill now offers a branching capability that they adopted from PTC’s older IntraLINK product. If any other PLM systems support branching, I’d love to hear more about it.

    Branching and Merging

    Now we get to why I said earlier that what we have now in PLM software isn’t even as good as what Subversion users have. Despite its problems, Subversion does have the ability to create an independent code branch for development and then merge that back into the trunk. Teamcenter forces us to just put all of our changes directly into the trunk.

    Let’s return to our future world of robot butlers, flying cars, and Distributed PLM. And let’s stipulate that in this world we can branch our designs. If I want to propose a change I don’t create a new revision of the model, I create an independent branch of the design that only I can see. When I look at my branch I see the same things that everyone else sees except for the things I’m changing. But no one else sees it unless I share my branch with them. My branch could change a single model, or it could change an entire assembly. I do my work in that branch. When I want to submit the proposal, I share my branch for review. Only if it’s approved do I merge my updates back into the central “trunk” of the repository, making them available for all. If my proposal is rejected, I just… do nothing. My branch can sit there forever for all I care. It’s not hurting anybody. But if the Powers That Be finally realize that my proposal was right, then it’s there, ready to be revived. Think about how much cleaner that is than having everything that’s ever been attempted, accepted, and rejected living forever under the central item.

    I won’t get into why Joel says that branching and merging is better under Mercurial than under Subversion, but it is interesting. (Briefly: Subversion tracks versions, Mercurial tracks changes.)

    This is a Big Deal

    If you haven’t figured it out by now, I think this is a big deal. We tend to think of PLM and Source Control as being separate worlds, but they’re really dealing with very similar problems. But while source control systems have been evolving, the central core of how PLM works seems to have stagnated a decade or more ago. I imagine that PLM vendors, always looking for a new feature to sell to a new customer (or use to retain an existing one), aren’t spending a lot of time rethinking the fundamental model of version control they’re built upon. Look! Shiny object!

    It’s time PLM starts to adopt some of the capabilities source control systems are providing. This won’t be an incremental improvement along the lines of, “We’ve redesigned the interface to reduce the number of mouse clicks a typical user makes in a day by 5%!” No, this will be huge.

    In closing, Joel Spolsky compares Subversion and Mercurial by saying,

    If you are using Subversion, stop it. Just stop. Subversion = Leeches. Mercurial and Git = Antibiotics. We have better technology now.

    Joel Spolsky, Distributed Version Control is here to stay, baby

    Our PLM systems are not only not yet on the level of Antibiotics; without support for branching and merging, they’re not even on the level of leeches. I’m not sure what quack medicine was considered state-of-the-art before leeches came into vogue, but that’s about where we’re at. Goat sacrifice, maybe. And we’re the goats.

    I’m really hoping we’ll see Distributed PLM in the future. As a Teamcenter guy, I hope Teamcenter implements it first. If not, Windchill or Aras or one of the others that I can’t think of right now might just use this to gain a market advantage — and more power to them if they do.

    What do you think?

    So what do you all think? Am I onto something here? Or do you figure that I must be on something? I don’t pretend to think that this wouldn’t be hard to implement. But I think it would be worth it.

    I’m sure there are problems I’ve overlooked, I’m also sure there are ways to leverage branching, merging, and local repositories that I haven’t considered. Please share both in the comments below.

    Lastly, if you liked this post, your +1′s, likes, and shares help to get the word out to the rest of the world and will be very much appreciated. Thank you!

    The post Git, and why We Need Distributed PLM appeared first on The PLM Dojo.

    Don’t Ditch Your Home-Grown PLM


    YourCompanysApp

    Does this situation sound familiar? Your company has an in-house system for doing some portion of what is now part of Teamcenter (or Windchill or Aras or…). The Powers That Be have decreed that Teamcenter will replace the homegrown software. Everyone hates the old system. It’s old and complicated and ugly. When it was new it was state of the last decade’s art. Everyone, from management to users, wants to know how soon it can be replaced and decommissioned.

    Well, I’ve been pondering this question myself lately. I think I have the answer you should give them. I say, tell them that you will never get rid of the old system.

    Here’s why — and what I think you should do instead.

    Old, Ugly, and Battle Tested

    Do you remember Netscape Navigator? Nowadays there are probably more than a couple of you who don’t, but once upon a time it was the internet web browser. And now it’s gone. Writing back in 2000, Joel Spolsky gave an analysis of what had gone wrong:

    [Netscape made] the single worst strategic mistake that any software company can make:

    They decided to rewrite the code from scratch…

    The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive.

    Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.

    When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. All those years of programming work.

    Joel Spolsky, Things You Should Never Do, Part I

    Completely replacing your old, in-house, system with Teamcenter runs the same risks as rewriting your application. That old system does something, right? In order to replace it you’ll have to either eliminate or replace each task it currently performs. And some of those tasks probably don’t map directly to the new software, do they? Don’t get me wrong; I hate hearing, “the new system must look and act just like the old system.” This is the time to rethink your old assumptions and to review your current pain points. But I’m betting that there’s still going to be something that can’t be eliminated or replaced. You’ll have to customize Teamcenter to make it do something that the old system is already doing. And that’s where you can get in trouble.

    I’m saying Teamcenter in this post, cuz, well, this is a Teamcenter focused blog and that’s what I know. But I think that this could apply to any system you’re moving to — PLM or otherwise. So if that’s the case, just mentally edit what follows to replace Teamcenter with your software of choice.

    Don’t Eliminate, Integrate

    What do I propose instead? My proposal is a phased approach using a Service Oriented Architecture (SOA):

    1. Expose legacy data and functionality using SOA

      First, provide services so Teamcenter can access the data and functionality of your old system. The services can be web services, CORBA, XMLRPC, or whatever. If you’re familiar with the Model-View-Controller paradigm, you’re providing a controller layer that Teamcenter will use to get to your old system.

    2. Replicate Data

      Next, use the services to make data in the old system visible in the new system. At this stage the data is still authored, edited, validated, and stored in the legacy system, which remains the system of record. If you want to push data updates from the old system you may need to add SOA services to Teamcenter that you can use to push the data.

    3. Enable Data Manipulation in Teamcenter

      Next, use your services to allow users to author and edit data in Teamcenter that will then be pushed to the legacy system. The legacy system is still the system of record and any data validations are still done there. Teamcenter is basically still operating as an interface layer. Now you can start thinking of having users only use Teamcenter as an interface to the data.

    4. Invoke Data Validations From Teamcenter

      This may be the hardest step. You may opt to skip it. By now users are interacting with the data as if it were owned by Teamcenter even though it really lives in the legacy system. The validations happen when the data is entered into it. Our next goal is to stop storing data in the old system. To do that we need to enable Teamcenter to invoke the validations directly. This requires that the validations themselves be available as a service, which may mean detangling them from the data they validate. Ideally validations are implemented as pure functions, with no dependencies on state and no side effects (see the sketch after this list). Things are often not ideal, however, so some refactoring and redesign may be required.

    5. Stop storing data in the legacy system

      If the last step was successful we can stop entering data into both systems. If you missed something, you’ll find out now. Now Teamcenter is the system of record and the legacy system is merely a service that validates data.
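
    To make step 4 a little more concrete, here is a rough sketch of what a validation looks like once it has been detangled into a pure function behind the service layer. Everything here is hypothetical: the struct, the function name, and the example rules are all made up for illustration, not taken from any real system.

    	#include <string>
    	#include <vector>
     
    	// A validation result the service can hand back to Teamcenter.
    	struct ValidationIssue {
    		std::string field;
    		std::string message;
    		ValidationIssue(const std::string& f, const std::string& m)
    			: field(f), message(m) {}
    	};
     
    	// Pure function: all inputs arrive as arguments, the result is the return
    	// value, and nothing is read from or written to the legacy database.
    	std::vector<ValidationIssue> validate_part(const std::string& part_number,
    	                                           const std::string& unit_of_measure)
    	{
    		std::vector<ValidationIssue> issues;
    		if (part_number.size() != 10)
    			issues.push_back(ValidationIssue("part_number",
    			                                 "must be exactly 10 characters"));
    		if (unit_of_measure != "EA" && unit_of_measure != "KG")
    			issues.push_back(ValidationIssue("unit_of_measure",
    			                                 "not an approved unit"));
    		return issues; // an empty vector means the data passed
    	}

    Teamcenter would call this through whatever transport you picked in step 1 (web services, CORBA, XMLRPC, or whatever). The legacy system still owns the rule; it just no longer has to own the data.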

    Final thoughts

    So that’s my idea anyhow — keep the old logic around, but transmogrify it into a service that performs its old functions in service to Teamcenter (or whatever you’re using). That way you don’t have to recreate all those years of accumulated effort from scratch.

    I would very much appreciate hearing your thoughts, especially your criticisms and suggestions. I learn as much as anyone from the discussions.

    If you like this post, your +1s, likes, tweets, and shares are all appreciated and bring the Dojo one step closer to achieving world domination. (Just kidding, I’ll settle for a continent or two).

    The post Don’t Ditch Your Home-Grown PLM appeared first on The PLM Dojo.
