It's often useful to check the size of a column before you post text to it. For varchar columns, use Len(). For text columns, use DataLength().
So:
SELECT LEN(myvarcharfield) AS varcharsize
or
SELECT DATALENGTH(mytextfield) AS textsize
Of course, this only gives you the length of what has been put in there, not the column's maximum allowable size. For that, use a query like the following:
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.yourtablename'), 'yourcolumnname', 'precision') AS colLength
This can be really, really useful when you are posting notes back to your database, for example, to make sure you do not overflow the allowed size. Here I check against my NOTES table.
checklen := SqlExecute("SELECT COLUMNPROPERTY(OBJECT_ID('dbo.NOTES'), 'NOTES', 'precision') AS colLength");
:IF Len(texttoupdate) <= val(checklen[1,1]);
//the text fits, so commit the text data
:ENDIF;
Personally, I just split the data into chunks of the size allowed by the column and enter them into the table. I use a management field as the id to track the split note so that it appears as a single note to the user and to other scripts that need a unique value.
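For the chunking itself, here's a minimal sketch of the idea in SSL. The NOTE_ID/SEQ/NOTES column layout, the noteid variable and the Str() conversion are illustrative assumptions rather than my actual schema, and a real script should bind the values as parameters instead of concatenating them:

checklen := SqlExecute("SELECT COLUMNPROPERTY(OBJECT_ID('dbo.NOTES'), 'NOTES', 'precision') AS colLength");
maxlen := val(checklen[1,1]);
nstart := 1;
nseq := 1;
:WHILE nstart <= Len(texttoupdate);
    chunk := SubStr(texttoupdate, nstart, maxlen);
    //insert one column-sized piece; the management field (NOTE_ID) ties the pieces together
    SqlExecute("INSERT INTO dbo.NOTES (NOTE_ID, SEQ, NOTES) VALUES ('" + noteid + "', " + Str(nseq) + ", '" + chunk + "')");
    nstart := nstart + maxlen;
    nseq := nseq + 1;
:ENDWHILE;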
Friday, December 21, 2012
Thursday, December 20, 2012
Simple Server Script Thoughts
Server scripts are handy things.
We use them frequently, for just about any type of interaction between the data and the UI that you can think to toss into starlims. Need to commit some new information to your database tables? Run a server script with some SQL insert/update statements. Pull information? The same, just with Select statements.
Server scripts are wonderful and handy but require a little care and feeding when they are set up. People tend to fall into two groups when using them. Some keep each script to a single activity: the script handles one update/select/insert/create, or a couple of SQL commands centered around processing data to a conclusion, like performing checks on provided data and then running an update or series of updates. Others build complex, multifunctional scripts, often with DoProc() statements to handle process flow.
Neither approach is good or bad, or even better than the other: it really depends on your programming goals and philosophy. I can't say I've noticed any performance impact from taking one approach over the other, though I do find highly complex server scripts harder to code review and debug.
So, as you go about building your server scripts, let me put out a few things for you to keep in mind.
Don't make them brittle
Changes in your tables can break even the most well-written scripts. Even small changes have large impacts, like a column suddenly becoming non-nullable or holding a different kind of value. You can code around some of these potential issues by making good use of parameters. In fact, for simple updates or data pulls via select statements, I highly suggest you create a flexible server script which accepts an array of column names, an array of values, and a table name (there's a sketch of the idea at the end of this post). Having a script like this in hand allows you to handle any single table and juggle update, select or insert statements: read the values in each array and build the SQL statement appropriately. If changes to the tables are a concern, use some preliminary scripting to check:
arrCols := sqlExecute("SELECT * FROM sys.columns WHERE object_id = object_id('dbo.Tablename')");
That will give you the list of columns in the table. Don't forget to use your table's name and not the placeholder I put in there.
If you think nullability might be an issue for a particular column, you can query to find out:
arrIsNullable := SqlExecute("select IS_NULLABLE from INFORMATION_SCHEMA.COLUMNS
where TABLE_SCHEMA='dbo' AND TABLE_NAME='Tablename' and COLUMN_NAME='Columnname'");
Obviously you need to plug in your table and column.
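If you're taking column names in as parameters, the sys.columns query above also lets you confirm a requested column still exists before you build any SQL with it. A minimal sketch, where sRequestedCol is a made-up variable holding the incoming column name:

arrCols := SqlExecute("SELECT name FROM sys.columns WHERE object_id = object_id('dbo.Tablename')");
bFound := .F.;
:FOR i := 1 :TO Len(arrCols);
    :IF Upper(arrCols[i,1]) == Upper(sRequestedCol);
        bFound := .T.;
    :ENDIF;
:NEXT;
//only build the statement with sRequestedCol when bFound is .T.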
Think Security
Security doesn't show up out of nowhere. Even though you are programming inside the system and not wide open to the world, you need to keep in mind what your server scripts are updating, especially if you have layers of admins who are potentially coding. It may not be in your best interest to let everyone code against the underlying tables. You can enforce some protection by creating views that contain only the columns you want exposed and having those accessed instead. Also, resist concatenating things together: it can leave you open to SQL injection attacks. Use parameters instead.
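For example, a view like the following (the names are placeholders, not the actual STARLIMS schema) exposes just the note text and keeps the rest of the underlying table out of reach:

-- expose only what other coders actually need
CREATE VIEW dbo.vw_NOTES_READONLY AS
SELECT NOTE_ID, NOTES, CREATED_ON
FROM dbo.NOTES;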
Write Maintainable Code
Focus on writing code that will age well and be understandable a year after you crafted it. Worry less about performance and focus on well-factored code. If performance is a requirement or becomes a problem, analyze your code to find the trouble spots and optimize them.
Also, remember when I mentioned you might consider using a flexible server script and passing columns by parameters? Keep in mind that only 15% of an application's life is spent in development and the rest is in maintenance. That means you need to make your job easier, since the code you write today is going to be around plaguing you for years to come. Using parameters instead of hard-coding columns and values means that when you update your table with something new, you only need to adjust code in one area instead of re-writing it in multiple places.
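And to make the flexible-script idea concrete, here's the sketch I promised earlier: a generic single-table update. The parameter names are illustrative and the values are assumed to be strings for simplicity; a production version should whitelist sTable and aCols against sys.columns and bind its values rather than concatenate them, per the security note above:

:PARAMETERS sTable, aCols, aVals, sKeyCol, sKeyVal;
:DECLARE sSql, i;
sSql := "UPDATE dbo." + sTable + " SET ";
:FOR i := 1 :TO Len(aCols);
    sSql := sSql + aCols[i] + " = '" + aVals[i] + "'";
    :IF i < Len(aCols);
        sSql := sSql + ", ";
    :ENDIF;
:NEXT;
//key the update on a single column/value pair for this sketch
sSql := sSql + " WHERE " + sKeyCol + " = '" + sKeyVal + "'";
:RETURN SqlExecute(sSql);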
Wednesday, December 19, 2012
Difference between the debugger and player
In truth I use both of these, though probably the player the most. CTRL + F5 is your friend! Of course, so is F5. Anyway, I've outlined a couple of reasons why you would use one over the other, and why you would want to use both.
Debugger
- The ability to set breakpoints cannot be overstated. Seeing how the program flows and executes allows you to make good decisions. Nothing new there: it's the basics of any good IDE.
- The ability to watch variables in the local space as the code executes. This lets you know the values you expect are actually in place. Those two things alone make the debugger worth it.
- While not obvious, the debugger gives you different error messages than the player.
Player
- It allows you to quickly run a form and doesn't require the same permissions as running the debugger does.
- You can't set breakpoints, but effective use of alerts, dialogs and other feedback can give you a lot of information.
- It's actually more effective in the short run to use the player, especially if you are coding something graphics-intensive or that requires exactness in the user interface. Timewise, you can often bring something up in the player, check it and get an answer before the debugger even loads. That time savings is invaluable.
Tuesday, December 18, 2012
Coding to handle upgrades between versions
Okay, so as a starting point, it's important to remember that you, as a developer, are coding in the same code space that the Starlims developers are coding in. That means we need to prepare carefully.
Why?
Well, thinking about it in the light I cast above should tell you that when they (Starlims) release a new update, it's going to overwrite anything you've put in that same space. That's the prime reason why I segregate my code from their namespaces whenever possible: to protect it as much as I can.
So, when I start coding inside a LIMS, I define a new area (named appropriately for the LIMS or the place I'm developing in) and build, import or recreate forms, server scripts and data sources as I need them. The same applies to building new tables, re-naming and localizing them as needed.
That keeps updates from overwriting them. It also allows you to apply some good database design. I realize the Starlims folks left a lot from v9 in the database tables to keep backwards compatibility, but ouch.
Anyway, the point is: build your own space so that when you update the application it doesn't overwrite or undo your hard work.
Playing with frames
Frames are terribly versatile. Placing them allows you to build in a strong pattern of re-usability. You use them, of course, via the XFD designer, like you would any .NET component in an IDE.
In fact, it's obvious that the starlims folks went heavily out of their way to make it look and feel like a traditional programming interface. Which is nice, of course, since making it look otherwise would have made it that much harder to code for -- no thanks to that.
So, getting back on topic, XFD Frames can do a ton of heavy lifting for you, both for reuse and for portability. For example, instead of laying out a tab control, coding for each tab, and having the user jump back and forth between them, you can do the same thing with frames, especially if you need to define some data that needs sharing between the figurative tabs.
[Layout: side frame | body frame]
For example, I like to keep the design of the LIMS I am working on similar to webpages, for the user's ease of learning and use. In doing so, I tend to frame the left side into a console, like the one present in the default system. When I don't, I put the frame along the top to form a navigation menu.
The other frame I use to revolve information through. It comes in handy when you need to update or adjust links on the fly or to keep track of information. While starlims provides form and navigator variables for such things, I find that putting defined variables there is pretty handy and much more accessible across multiple forms as I revolve them through the frames. You can access them by concatenating the frame's name + form + variable. And if you are re-using linklabels or buttons in your navigation, writing and re-writing dynamic events based on the form in the other frame (probably the best use of the sender/event args duo I have found) becomes very useful; there's a sketch after the reference notes below.
Referencing is done as follows:
var Base_Form : String = form.Parent.ParentForm.Id;
- id is the name of the base form
- ParentForm is the form that contains the frame (i.e., its parent)
- Parent is the frame
- form is the actual form
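Putting those pieces together, here is a hedged sketch of the dynamic-event idea. The frame name (fraBody), the CurrentFolder defined variable and the RefreshData routine are names I made up to illustrate the frame + form + variable pattern, not built-ins:

function lnkSamples_OnClick(sender, eventArgs)
{
    // which form is the body frame currently hosting?
    var sHostedForm : String = fraBody.form.Id;

    // read a defined variable off the form living in the other frame
    var sFolder : String = fraBody.form.CurrentFolder;

    if (sHostedForm == "FolderDetail")
    {
        // already on the detail form: refresh it for the tracked folder (made-up routine)
        fraBody.form.RefreshData(sFolder);
    }
}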
User / Unit Association
Even though the UI will allow you to delete a site/lab: DON'T.
Actually removing a lab is a big pain in the culo, but if you do manage to remove all the associations to the lab, you can delete it. Your users will first have to be associated with another lab AND unit. If they aren't, you will still be allowed to delete the lab and close the UI; you just won't be able to get back in afterward, but ...
So to fix the issue, go to the DB backend. The tables you will be concerned with are SERVGANALYST, SERVGRPTYPES, SERVEGRP, BUILDINGS, LOCATIONS (lab) and ROOMS. On the upside, making mistakes gives one the opportunity to get intimately familiar with the application and database.
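Since I can't promise those tables' layouts match yours from version to version, a safe first step is to list their columns and spot the lab/unit keys before touching any rows. This uses only the standard INFORMATION_SCHEMA views:

-- list the columns of the affected tables so you can find the lab/unit references
SELECT TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME IN ('SERVGANALYST', 'SERVGRPTYPES', 'SERVEGRP',
                     'BUILDINGS', 'LOCATIONS', 'ROOMS')
ORDER BY TABLE_NAME, ORDINAL_POSITION;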
Wednesday, March 21, 2012
log in issues
Okay, I'm still not clear on why they happened to implement user restriction in the manner they did. Still scratching my head. I swear they sold us a development version instead of a production one.
Okay.
Deep breath. Whining done.
So, my co-developer and I are working on bringing starlims10 up to a running state. We've been in its guts, turning things on and off, writing new code, etc. So, we are finally ready to showcase a little of what we have been working on and we pull a few people in to test it out. As they log in, we quickly find that they start getting an error message about UTC issues.
That started some head scratching.
After some digging (global search works nicely!) we tracked it down. It was in the user session and connection procedures run during the log in process. The way it was written was... different. It called itself, which is no problem: overloading is a natural programming technique. It didn't pass the right parameters, though, on the second call... which may have been why it was breaking: hard to tell. A few modifications, however, and we were in business. Issue solved.
So if you start getting UTC error messages start looking in the user session and log in code. You'll probably find your answer.