Showing posts with label time. Show all posts
Showing posts with label time. Show all posts

Thursday, March 29, 2012

can we get the deleted data back in SQL 2000

Hi im current using SQL 2000, data in three tables has got deleted on friday,
in the mean time some data is also get added into that tables. after two days
only i recognize the some data are deleted. No i need the deleted data and
also the newly inserted data. what should i do?
Sakthivel
SQL Server Developer
INDIA
Hi
Only if you have BACKUP LOG file issued during this period
"Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
> Hi im current using SQL 2000, data in three tables has got deleted on
> friday,
> in the mean time some data is also get added into that tables. after two
> days
> only i recognize the some data are deleted. No i need the deleted data and
> also the newly inserted data. what should i do?
> --
> Sakthivel
> SQL Server Developer
> INDIA
|||http://www.karaszi.com/SQLServer/info_restore_log_several_times.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
> Hi im current using SQL 2000, data in three tables has got deleted on friday,
> in the mean time some data is also get added into that tables. after two days
> only i recognize the some data are deleted. No i need the deleted data and
> also the newly inserted data. what should i do?
> --
> Sakthivel
> SQL Server Developer
> INDIA
|||Yes, I have the backlog file. Everyday midnight the fullbackup will be taken
and stored.
Sakthivel
SQL Server Developer
INDIA
"Uri Dimant" wrote:

> Hi
> Only if you have BACKUP LOG file issued during this period
>
> "Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
> news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
>
>
|||Is the database in simple or full recovery model?
Are you doing regular transaction log backups?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
news:9037CCBC-8515-44C0-9B51-64DA9B27F27C@.microsoft.com...[vbcol=seagreen]
> Yes, I have the backlog file. Everyday midnight the fullbackup will be taken
> and stored.
>
> --
> Sakthivel
> SQL Server Developer
> INDIA
>
> "Uri Dimant" wrote:
|||Thanks Tibor Karaszi and Uri
Sakthivel
SQL Server Developer
INDIA
"Tibor Karaszi" wrote:

> http://www.karaszi.com/SQLServer/info_restore_log_several_times.asp
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
> news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
>
>

can we get the deleted data back in SQL 2000

Hi im current using SQL 2000, data in three tables has got deleted on friday
,
in the mean time some data is also get added into that tables. after two day
s
only i recognize the some data are deleted. No i need the deleted data and
also the newly inserted data. what should i do?
--
Sakthivel
SQL Server Developer
INDIAHi
Only if you have BACKUP LOG file issued during this period
"Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
> Hi im current using SQL 2000, data in three tables has got deleted on
> friday,
> in the mean time some data is also get added into that tables. after two
> days
> only i recognize the some data are deleted. No i need the deleted data and
> also the newly inserted data. what should i do?
> --
> Sakthivel
> SQL Server Developer
> INDIA|||http://www.karaszi.com/SQLServer/in...veral_times.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
> Hi im current using SQL 2000, data in three tables has got deleted on frid
ay,
> in the mean time some data is also get added into that tables. after two d
ays
> only i recognize the some data are deleted. No i need the deleted data and
> also the newly inserted data. what should i do?
> --
> Sakthivel
> SQL Server Developer
> INDIA|||Yes, I have the backlog file. Everyday midnight the fullbackup will be taken
and stored.
Sakthivel
SQL Server Developer
INDIA
"Uri Dimant" wrote:

> Hi
> Only if you have BACKUP LOG file issued during this period
>
> "Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
> news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
>
>|||Is the database in simple or full recovery model?
Are you doing regular transaction log backups?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
news:9037CCBC-8515-44C0-9B51-64DA9B27F27C@.microsoft.com...[vbcol=seagreen]
> Yes, I have the backlog file. Everyday midnight the fullbackup will be tak
en
> and stored.
>
> --
> Sakthivel
> SQL Server Developer
> INDIA
>
> "Uri Dimant" wrote:
>|||Thanks Tibor Karaszi and Uri
Sakthivel
SQL Server Developer
INDIA
"Tibor Karaszi" wrote:

> http://www.karaszi.com/SQLServer/in...veral_times.asp
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
> news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
>
>

can we get the deleted data back in SQL 2000

Hi im current using SQL 2000, data in three tables has got deleted on friday,
in the mean time some data is also get added into that tables. after two days
only i recognize the some data are deleted. No i need the deleted data and
also the newly inserted data. what should i do?
--
Sakthivel
SQL Server Developer
INDIAHi
Only if you have BACKUP LOG file issued during this period
"Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
> Hi im current using SQL 2000, data in three tables has got deleted on
> friday,
> in the mean time some data is also get added into that tables. after two
> days
> only i recognize the some data are deleted. No i need the deleted data and
> also the newly inserted data. what should i do?
> --
> Sakthivel
> SQL Server Developer
> INDIA|||http://www.karaszi.com/SQLServer/info_restore_log_several_times.asp
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
> Hi im current using SQL 2000, data in three tables has got deleted on friday,
> in the mean time some data is also get added into that tables. after two days
> only i recognize the some data are deleted. No i need the deleted data and
> also the newly inserted data. what should i do?
> --
> Sakthivel
> SQL Server Developer
> INDIA|||Yes, I have the backlog file. Everyday midnight the fullbackup will be taken
and stored.
Sakthivel
SQL Server Developer
INDIA
"Uri Dimant" wrote:
> Hi
> Only if you have BACKUP LOG file issued during this period
>
> "Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
> news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
> > Hi im current using SQL 2000, data in three tables has got deleted on
> > friday,
> > in the mean time some data is also get added into that tables. after two
> > days
> > only i recognize the some data are deleted. No i need the deleted data and
> > also the newly inserted data. what should i do?
> > --
> > Sakthivel
> > SQL Server Developer
> > INDIA
>
>|||Is the database in simple or full recovery model?
Are you doing regular transaction log backups?
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
news:9037CCBC-8515-44C0-9B51-64DA9B27F27C@.microsoft.com...
> Yes, I have the backlog file. Everyday midnight the fullbackup will be taken
> and stored.
>
> --
> Sakthivel
> SQL Server Developer
> INDIA
>
> "Uri Dimant" wrote:
>> Hi
>> Only if you have BACKUP LOG file issued during this period
>>
>> "Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
>> news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
>> > Hi im current using SQL 2000, data in three tables has got deleted on
>> > friday,
>> > in the mean time some data is also get added into that tables. after two
>> > days
>> > only i recognize the some data are deleted. No i need the deleted data and
>> > also the newly inserted data. what should i do?
>> > --
>> > Sakthivel
>> > SQL Server Developer
>> > INDIA
>>|||Thanks Tibor Karaszi and Uri
--
Sakthivel
SQL Server Developer
INDIA
"Tibor Karaszi" wrote:
> http://www.karaszi.com/SQLServer/info_restore_log_several_times.asp
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Sakthi" <Sakthi@.discussions.microsoft.com> wrote in message
> news:ABD49489-6EC5-475B-ABB9-6F77B5A86903@.microsoft.com...
> > Hi im current using SQL 2000, data in three tables has got deleted on friday,
> > in the mean time some data is also get added into that tables. after two days
> > only i recognize the some data are deleted. No i need the deleted data and
> > also the newly inserted data. what should i do?
> > --
> > Sakthivel
> > SQL Server Developer
> > INDIA
>
>

Sunday, March 25, 2012

Can we boost up the priority of a procedure

Hi,
I have a stored procedure which works fine under normal load.
But under heavy load on SQL Server, my procedure is taking very long time to
execute.
Is there any way to increase the priority of my procedure so that it get pro
per time slot.
My scenario is something is like this:
My procedure PMain is continously reading a table T1 processing rows one by
one and delete all rows of the table which are processed.
There are set of procedures which are continously inserting rows in table T1
which will be read by procedure PMain.
Procedure PMain is able to process with almost same speed as rows are added
to table T1.
But situation becomes worse when my another application fires a select query
on 20 tables and each table containing 30000 rows. This application takes r
ound about 1 hour to complete and during this time performance of my procedu
re goes down badly which I don't want.
Please suggest me some ways so that I can improve performance of this proced
ure.
Thanks in advance.
PushkarPushkar (pushkartiwari@.gmail.com) writes:
> I have a stored procedure which works fine under normal load. But under
> heavy load on SQL Server, my procedure is taking very long time to
> execute. Is there any way to increase the priority of my procedure so
> that it get proper time slot.
> My scenario is something is like this:
> My procedure PMain is continously reading a table T1 processing rows
> one by one and delete all rows of the table which are processed. There
> are set of procedures which are continously inserting rows in table T1
> which will be read by procedure PMain.
> Procedure PMain is able to process with almost same speed as rows are
> added to table T1.
> But situation becomes worse when my another application fires a select
> query on 20 tables and each table containing 30000 rows. This
> application takes round about 1 hour to complete and during this time
> performance of my procedure goes down badly which I don't want.
No, there is no process priority in SQL Server.
It is not clear to me whether the other application access the table(s)
that PMain works with or some completely unrelated tables. In the former
case, there could be blocking issues, in the latter there is only a case
of competition of resources.
Unfortunately, without knowing what is is Pmain, I can't give any advice.
But it could be the case that Pmain is not optimally written - or does
not have the opitimal indexing to work with. As long as the server is
not loaded, the performance is good anyway.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Tuesday, March 20, 2012

Can Timestamp column value overflow?

Hi,
I am having a table with a timestamp column. The data in this table is
assumed to accumulate over a period of time. On an avarage every second a
row is inserted into this table.
I using time stamp column for two reasons:
- Ensure that the value of timestamp is ever incrementing.
- Time stamp value should be unique.
Issue: Is there any probability of timestamp getting overflow over a period
of time? If there is a overflow what is the behavior of SQL Server in such
cases?
If there can not be overflow, then how SQL Server manages to generate ever
incrementing unique value for 8 bytes size column.
A bigint Identity column gives arithmetic overflow error if value exceeds
the maximum range.
Thanks in advance.
Pushkar> A bigint Identity column gives arithmetic overflow error if value exceeds
> the maximum range.
I don't remember the exact calculation, but I think you have to do 10,000
inserts a second for 122 years or something like that, to exceed the upper
bound of a BIGINT. So, you're probably safe. If you have anything other
than this table in the database, you're going to run out of available disk
space on the planet before you use up all of the unique, ever-incrementing
BIGINT values available to you.
Just my opinion.
A|||Consider using an identity column of datatype 4 byte int.
"Pushkar" <pushkartiwari@.gmail.com> wrote in message
news:%23fMDNZ0UGHA.5044@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I am having a table with a timestamp column. The data in this table is
> assumed to accumulate over a period of time. On an avarage every second a
> row is inserted into this table.
> I using time stamp column for two reasons:
> - Ensure that the value of timestamp is ever incrementing.
> - Time stamp value should be unique.
> Issue: Is there any probability of timestamp getting overflow over a
> period of time? If there is a overflow what is the behavior of SQL Server
> in such cases?
> If there can not be overflow, then how SQL Server manages to generate ever
> incrementing unique value for 8 bytes size column.
> A bigint Identity column gives arithmetic overflow error if value exceeds
> the maximum range.
>
> Thanks in advance.
> Pushkar
>|||> Issue: Is there any probability of timestamp getting overflow over a period of time? If t
here is a
> overflow what is the behavior of SQL Server in such cases?
The way I remember it:
Even when this was internally a 6 byte size (some earlier version), you coul
d do something like 100
transactions per second for over 100 years before overflow. And it is now 8
byte.
And well before such a theoretic overflow would happen, SQL Server would log
an error and not permit
any more updates in the database, would such a (theoretic) overflow occur. T
his I recall reading in
Books Online, some earlier version.
OK, did some calculations. See code below. If you do 1 transaction per secon
d, it will "overflow"
after 146,235,604,338 years. Or, put another way, if you do 1 million transa
ctions per second, it
will overflow after 146,235 years. Say you plan for a life span of 100 years
for the database, you
would have to do 1 billion transactions per second. TSQL calculations:
DECLARE @.noPossibleValues numeric(38,0)
DECLARE @.secondsPerYear int
SET @.noPossibleValues = POWER(CAST(2 AS numeric(38,0)), 62)
SET @.secondsPerYear = 60*60*24*365
SELECT @.noPossibleValues/@.secondsPerYear AS YearsIfOneTransactionPerSecond
SELECT @.noPossibleValues/(CAST(@.secondsPerYear AS bigint) * 1000000) AS
YearsIfMillionTransactionsPerSecond
And here is some old text that I found on the subject:
"Maximum Value
--
Timestamps increase until the maximum value that can be stored in 6
bytes (2**48) is reached (8 bytes, nowadays). When this maximum is reached,
the database
will not permit any more updates.
A 935 warning message is generated when there are only 1,000,000
timestamp values left in the database.
The only way to start over is to copy out all of the data with BCP
and to re-create the database; dumping and restoring will not help.
This is not a major concern because at 100 transactions per second,
2**48 will not wrap for more than 100 years."
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Pushkar" <pushkartiwari@.gmail.com> wrote in message news:%23fMDNZ0UGHA.5044@.TK2MSFTNGP09.p
hx.gbl...
> Hi,
> I am having a table with a timestamp column. The data in this table is ass
umed to accumulate over
> a period of time. On an avarage every second a row is inserted into this
table.
> I using time stamp column for two reasons:
> - Ensure that the value of timestamp is ever incrementing.
> - Time stamp value should be unique.
> Issue: Is there any probability of timestamp getting overflow over a perio
d of time? If there is a
> overflow what is the behavior of SQL Server in such cases?
> If there can not be overflow, then how SQL Server manages to generate ever
incrementing unique
> value for 8 bytes size column.
> A bigint Identity column gives arithmetic overflow error if value exceeds
the maximum range.
>
> Thanks in advance.
> Pushkar
>|||Oops, I did 2**62 instead of 2**64. Means my numbers were on the conservativ
e side... :-)
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message
news:e4gVkw0UGHA.1160@.TK2MSFTNGP09.phx.gbl...
> The way I remember it:
> Even when this was internally a 6 byte size (some earlier version), you co
uld do something like
> 100 transactions per second for over 100 years before overflow. And it is
now 8 byte.
> And well before such a theoretic overflow would happen, SQL Server would l
og an error and not
> permit any more updates in the database, would such a (theoretic) overflow
occur. This I recall
> reading in Books Online, some earlier version.
> OK, did some calculations. See code below. If you do 1 transaction per sec
ond, it will "overflow"
> after 146,235,604,338 years. Or, put another way, if you do 1 million tran
sactions per second, it
> will overflow after 146,235 years. Say you plan for a life span of 100 yea
rs for the database, you
> would have to do 1 billion transactions per second. TSQL calculations:
> DECLARE @.noPossibleValues numeric(38,0)
> DECLARE @.secondsPerYear int
> SET @.noPossibleValues = POWER(CAST(2 AS numeric(38,0)), 62)
> SET @.secondsPerYear = 60*60*24*365
> SELECT @.noPossibleValues/@.secondsPerYear AS YearsIfOneTransactionPerSecond
>
> SELECT @.noPossibleValues/(CAST(@.secondsPerYear AS bigint) * 1000000) AS
> YearsIfMillionTransactionsPerSecond
>
> And here is some old text that I found on the subject:
> "Maximum Value
> --
> Timestamps increase until the maximum value that can be stored in 6
> bytes (2**48) is reached (8 bytes, nowadays). When this maximum is reached
, the database
> will not permit any more updates.
> A 935 warning message is generated when there are only 1,000,000
> timestamp values left in the database.
> The only way to start over is to copy out all of the data with BCP
> and to re-create the database; dumping and restoring will not help.
> This is not a major concern because at 100 transactions per second,
> 2**48 will not wrap for more than 100 years."
>
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "Pushkar" <pushkartiwari@.gmail.com> wrote in message
> news:%23fMDNZ0UGHA.5044@.TK2MSFTNGP09.phx.gbl...
>|||Thanks a lot. Now I can safely use it.
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:uurqw78UGHA.4660@.tk2msftngp13.phx.gbl...
> Oops, I did 2**62 instead of 2**64. Means my numbers were on the
> conservative side... :-)
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote
> in message news:e4gVkw0UGHA.1160@.TK2MSFTNGP09.phx.gbl...
>sql

Monday, March 19, 2012

Can This Be Accomplished in My Time Dimension in my AS2005 Cube?

In my AS2005 cube, I have a time dimension with the following attributes: year, quarter and month. Also, there is currently no hierarchy by design.

Is there a way to make a custom attribute/time period that reflects the last 12 months (R12M) and work in the same manner such as year, quarter and month?

The time dimension would then have: years, quarter, month and R12M.

As a result, a user can drop a measure such as Sales Amt and then add the R12M attribute from the time dimension and would then receive the total sales amount for the last 12 months.

I hope that made sense. I can and have written a calculated member for R12M Sales, but this R12M view is very common and repeated in our world. As an example, we have R12M Sales, R12M Volume, R12M # of Customers, etc.

Right now I have a calculated member for each one of those metrics, but it would be nice to move the repeatedness of R12M into the time dimension.

Is this possible? If so, how?

Thanks!

If I am understanding your question correctly all you would need to do is to create a calculated member and then place it on one of your attribute hierarchies. Here is an example calculated member that resides on the [Time].[Month] hierarchy and sums up any measure for the last 12 months that have sales data. You can remove the filter if you do not load month members unless there is data.

HTH,

- Steve

//BEGIN Last 12 Sales Months

CREATE MEMBER CURRENTCUBE.[Time].[Month].[All].[Last 12 Sales Months]

AS

Aggregate(

Tail(

Filter([Time].[Month].Members,

([Measures].[Sales Amt]) <> 0),

12)

),

FORMAT_STRING = "Standard",

VISIBLE = 1 ;

//END Last 12 Sales Months

|||Steve,

Thanks for your help. Unfortunately, I'm still fairly new to Analysis Services and the whole cube thing that my understanding is very limited. With that said, assuming your approach will work, how do I add a Calculated Member to a time dimension?

I'm essentially doing all of this in BI Development Studio.

Regards,

Johnny|||Okay, so I think I know how to add the calculated member to the time dimension. I'll let you know how it goes.|||I don't think your suggestion is what I'm looking for, or I don't understand it.

I'd like to add an attirbute to the time dimension that represents the last 12 months, which would then be applied to any measure, whether it's sales or volume.

So a user could drag this "Last 12 Months" attribute from the time dimensoin and then whatever measure is dropped would result in the value for the last 12 months.

I believe your suggestion isn't as flexible and is only possible for sales?

Does that make sense?|||

Have you tried creating the member and adding to your "Month" attribute hierarchy? The reference to [Measures].[Sales Amt] is not needed and was used to demonstrate how you could filter out empty months if they existed in your cube. Here is a revised version that would give you a member on the "Time.Month" hierarchy which could be place on rows or columns for any measure to give you the twelve month total.

CREATE MEMBER CURRENTCUBE.[Time].[Month].[All].[Last 12 Sales Months]

AS

Aggregate(

Tail(

[Time].[Month].Members,

12)

),

FORMAT_STRING = "Standard",

VISIBLE = 1 ;

|||

I have not implemented a time hierarchy, so Month, Quarter, Year and now [R12 Months] are all attributes from my time dimension.

Hopefully, I'm using the correct terminology. The current members for the Month attribute are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 and "R12 Months".

If I add both the Year and Month attributes as row fields with Sales as a Data Item, I'm not getting what I want.

For example, for:

Year = 2006, Month = 1, Sales = $10M. "R12 Months" also equals $10M.

In this scenario, I'd want "R12 Months" during 01/2006 to be equal to the total sales for the 12 month period of 01/2005 to 12/2005.

If it was June 2006, I'd want the total sales for the period of June 2005 to May 2006.

Is this possible?

|||

I think I understand what the issue is now.What you need to do is to create a new attribute called “Period Calculations” and then create your rolling 12 month member using the “Period Calculations” dimension. Here are the steps you will need to follow:

1.) Edit your data source view and add a “Named Calculation” to your time dimension called “Period Calculations”. Enter the following string value for the calculation:

‘Current Period’

2.) Edit your Time dimension and drag the “Period Calculations” column you added to the time entity in the data source view to your “Attributes”.

3.) Click on the new “Period Calculations” attribute and then hit “F4” to bring up the properties window. Set the “IsAggregatable” property to “False”.

4.) Process your time dimension and cube.

5.) Add the following to your cube calculations script:

CREATE MEMBER CURRENTCUBE.[Time].[Period Calculations].[Rolling 12]

AS

Aggregate(

[Time].[Month].CurrentMember.Lag(12):[Time].[Month].CurrentMember *

{([Time].[Period Calculations].DefaultMember,

[Time].[Year].[All],

[Time].[Quarter].[All])}),

VISIBLE = 1 ;

This will allow you to display:

2005 10 Rolling 12 (Sum of any measure for 10-2004 through 10-2005)

2005 11 Rolling 12 (Sum of any measure for 11-2004 through 11-2005)

2006 1 Rolling 12 (Sum of any measure for 1-2005 through 1-2006)

HTH,

- Steve

|||

Steve,

Your help has been invaluable! However, it's not returning the exact figures I'm expecting. Unfortunately, I can't figure out what it's really doing. The "Current Period" works just fine, but the "Rolling 12" doesn't.

Unfortunately, since I don't know what it's really doing, I'm not sure what kind of information I can provide that would help you understand what's going on.

Adding just the Year and Month attributes, I get "$115,488,690" for the Year 2005 total. Adding in "Period Calculations", "Rolling 12" never has the value of $115,488,960. In my ideal world, I would expect to see that figure of $115,488,960 in January 2006.

Our Rolling 12 is for the prevoius 12 months -- not including the current month. So for January 2006, the Rolling 12 period for us is January 2005 thru December 2005. If it was September 2006, the Rolling 12 period would be September 2005 thru August 2006.

I assumed with your misunderstanding of our Rolling 12 period, I would see the figure of $115,488,960 under Rolling 12 for December 2005. For what it's worth, the figure I see is higher: $125,075,196.

Any idea what I can do or provide that would help you understand what's going on?

Again, I REALLY appreciate your time and efforts!!!

|||

Well at least I think we are getting closer. First lets look at adjusting the MDX to calculate the 12 months prior to the current month. To accomplish this all we have to do is adjust the "range" specifid in the Aggregate dimension. I am using the "Lag" function which will use the current month to move back through the previous periods, so all that is needed is to set the range to Lag(13):Lag(1) as shown here:

CREATE MEMBER CURRENTCUBE.[Time].[Period Calculations].[Rolling 12]

AS

Aggregate(

[Time].[Month].CurrentMember.Lag(13):[Time].[Month].CurrentMember.Lag(1) *

{([Time].[Period Calculations].DefaultMember,

[Time].[Year].[All],

[Time].[Quarter].[All])}),

VISIBLE = 1 ;

Now there is another topic that we have not discussed that may also be influencing your results. In an earlier post you stated that your Month attribute had the following members: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12. I made what may be a bad assumption that your "Key" values for the Month attribute were unique. What I mean by this is that the source data used to populate your month attributes looks like the following:

Key Name

200401 1

200402 2

...

200501 1

200502 2

If this is not the case then the data for each month will sum for all your years and you will not get the results you are looking for. Just another item to check.

HTH,

- Steve

|||

Hi Steve,

I tried the new definition and notice that for January of any year, it will not having the "Rolling 12" available, only "Current Period".

Here's how the underlying Date table looks like that the Time dimension points to:

Date_key | Date | Day | Year | Quarter | Month

2 | 1996-01-01 00:00:00.000 | 1 | 1 | 1 | 1996

3 | 1996-01-02 00:00:00.000 | 2 | 1 | 1| 1996

...

3783 | 2006-01-29 00:00:00.000 | 29 | 1 | 1 | 2006

....

and so on until 12/31/2020.

I'm in my dsv, I used a Named Query to pull data from this Data table where year <= year(getdate()).

So with our structure in our date table, anyway to get the results that I am looking for?

Thanks,

Johnny

|||

Johnny,

There is an important concept that you need to be aware of with regard to "attribute" hierarchies in 2005. It is important that your attribute keys are unique when you are trying to sum up the months for a specific year. The description of your time dimension leads me to believe that you do not have unique keys and the calculation you want will not work without them. There is a great section in one of the ProjectReal write ups that talks about this issue. Check out the following URL:

http://www.microsoft.com/technet/prodtechnol/sql/2005/realastd.mspx#EGAA

and do a search on:

"Best Practice (required): You must always ensure attribute key uniqueness"

HTH,

- Steve

|||

Steve,

Please see my previous post regarding the new behavior of "Rolling 12". I edited the post without realizing you had already replied.

Unfortunately, I don't understand MDX or Analysis Services that well at all.

I'm somewhat confused with my attribute keys not being unique. You're not referring to the date_key right?

This issue that you bring up, would it affect all calculations dependent on the Time dimesnion, or just in the case of Rolling 12?

I used to have a calculated member (still do, but it's not visible) for each Rolling 12 metric: Roling 12 Sales, Rolling 12 Volume, Rolling 12 # of Customers, etc. These all worked and calculated correctly.

I can provide you with the expression if that would help.

Regards,

Johnny

|||

Johnny,

There are many reasons why the unique key issue could cause difficulties. Given the limitations of this method of communication it is hard for me to know exactly what the issue is in your particular case. Feel free to contact me via my direct email and we can work further on this off line if you still need assistance.

- Steve

|||To anyone that may be reading this, I seriously cannot express how much appreciation I have for Steve helping me in this thread and offline. Again, thank you Steve!!!!

Sunday, March 11, 2012

Can stored procs run after handle is closed?

I have written a stored proceedure for MSSQL that needs to run for hours at
a time. I need to execute it from C++ code. The current code does:

nRet = SQLDIRECTEXEC(hstmt, "exec stored_proc", SQL_NTS)

followed shortly after by a

Free_Stmt_Handle(hstmt) //roughly

The stored proc currently dies with the statement handle, not fully
populating the table I need it to.

I need to either know when the proc finishes so I can close the handle after
that, or allow the proc to run independently on the server no matter what
the program is doing (is exited, etc), either of these is fine.

Please Help! Thanks in advance!
JosephI know nothing about C++, but if the proc runs for a very long time, it
might be better to implement it as a scheduled job. The client could
set a flag or insert a row into a 'queue' table, then you have a job
which runs every few minutes or whatever, and if the flag is set, it
then starts the stored proc.

Simon|||That is an interesting approach, ideally I would like to stay as far away
from the database as I can but it sounds like this could be the best way...
my stored procedure is running for the exact same number of instructions and
then dying, whereas if I run it via Query Analyzer it runs to completion.

I finally caved and just copy-pasted from Q.Analyzer into code to confirm
this. I will investigate a little further before taking that plunge.

Thanks
Joseph

"Simon Hayes" <sql@.hayes.ch> wrote in message
news:1112950699.762977.19700@.o13g2000cwo.googlegro ups.com...
>I know nothing about C++, but if the proc runs for a very long time, it
> might be better to implement it as a scheduled job. The client could
> set a flag or insert a row into a 'queue' table, then you have a job
> which runs every few minutes or whatever, and if the flag is set, it
> then starts the stored proc.
> Simon

Wednesday, March 7, 2012

Can SQL search for a .txt document

Good Morning,
I am very new to SQL and I'm haveing a hard time codeing some
automation.
I'd like to look for c:\ftp\prod.txt every 5 minutes and when found,
delete it and run a
DT package that will import other .txt documents.
Thanks
RobertHi Robert,
I wrote something similar up in my blog
(http://sqlblogcasts.com/blogs/tonyr.../05/19/450.aspx).
You can add a linked server into Indexing Service (which must be running to
do this).
EXEC sp_AddLinkedserver 'IDXServer', 'Indexing Service', 'MSIDXS', 'System'
You can now access the File System by doing simple queries, your query would
be something like this...
SELECT *
FROM OPENQUERY( IDXServer,
'SELECT Directory,
FileName,
Size
FROM SCOPE('' "c:\" '')
WHERE FileName LIKE 'prod.txt''
AND Directory = ''c:\ftp'' ')
Tony.
Tony Rogerson
SQL Server MVP
http://sqlblogcasts.com/blogs/tonyrogerson - technical commentary from a SQL
Server Consultant
http://sqlserverfaq.com - free video tutorials
"Robert" <rghutchinson@.geisinger.edu> wrote in message
news:1148997621.064423.148160@.u72g2000cwu.googlegroups.com...
> Good Morning,
> I am very new to SQL and I'm haveing a hard time codeing some
> automation.
> I'd like to look for c:\ftp\prod.txt every 5 minutes and when found,
> delete it and run a
> DT package that will import other .txt documents.
> Thanks
> Robert
>|||Also take a look at these 2 undocumented (so be careful) procedures
master..xp_getfiledetails 'c:\ftp\prod.txt '
master..xp_fileexist 'c:\ftp\prod.txt '
And besides that you can also use DTS with the FSO (File System Object)
Denis the SQL Menace
http://sqlservercode.blogspot.com/|||Thanks so much Tony, I'll try this.

Saturday, February 25, 2012

Can someone please help with this problem, this is my first time at this forum

Need Help in getting started on a SP in SQL SERVER2000
I need help in writing a stored procedure on SQL Server 2000. Basically the stored procedure's primary task is to generate invoice records and insert the records in a invoice table. In order to generate the invoice records, I have an initial table which are basically Day Records. The task that I would like to complete is to select all records from that initial table and I guess put them into a temp table. Now that i have my temp table, I would like to loop thru the table record by record, and do inserts in the invoice table. I cant seem to figure out the syntax since I am somewhat weak in TSQL Programming. I would appreciate any assistance on this forum or to my email. Also If you need some pseudocode for the process or the DDL for the initial table and the invoice table, I can definitely post that in the forum possibly in the next thread. Please advise, I would need a full written syntax to get me started since i have some other processes that I would need to build using the template. Thanks again.Why do you want to loop through the table instead of running a set-based SQL query against the entire dataset? What you are talking about doing is setting up an SQL Cursor which will step through the data one record at a time, but this is an ineffecient method of handling database data, and is only used when there is no other alternative (very rare) or by VB programmers who don't know TSQL (unfortunately very common).
The task you describe is probably very simple for someone with some knowledge of SQL, but if you are trying to wing-it you are heading into trouble. You need to get some expertise, either in the form of a good SQL Programming turorial or a good SQL consultant.

blindman

Friday, February 24, 2012

Can someone help!

Hi All
I need some help. I have a requirement wherein I need to know the time
difference that elapsed between the creation of the defect and the
resolution of the defect.
If there are holidays(Saturday & Sunday) between the time, then it
should not be included.
Is there a way I can do that by using a SQL stored procedure written in
SQL 2005.Any help is highly appreciated.
Thank you very much
Ami.The use of a calendar table would prove to be quite useful for this and
other date related issues. See:
Datetime -Calendar Table
http://www.aspfaq.com/show.asp?id=2519
Datetime -How to count the number of business days
http://www.aspfaq.com/show.asp?id=2453
--
Arnie Rowland, Ph.D.
Westwood Consulting, Inc
Most good judgment comes from experience.
Most experience comes from bad judgment.
- Anonymous
"Ami" <amutha55@.gmail.com> wrote in message
news:1157857041.111774.64780@.m73g2000cwd.googlegroups.com...
> Hi All
> I need some help. I have a requirement wherein I need to know the time
> difference that elapsed between the creation of the defect and the
> resolution of the defect.
> If there are holidays(Saturday & Sunday) between the time, then it
> should not be included.
> Is there a way I can do that by using a SQL stored procedure written in
> SQL 2005.Any help is highly appreciated.
> Thank you very much
> Ami.
>

Can someone explain this message?

Server: Msg 1204, Level 19, State 1, Line 30 The SQL Server cannot obtain a
LOCK resource at this time. Rerun your statement when there are fewer active
users or ask the system administrator to check the SQL Server lock and
memory configuration.
We can't find it in BOL.
Peace & happy computing,
Mike Labosh, MCSD
"When you kill a man, you're a murderer.
Kill many, and you're a conquerer.
Kill them all and you're a god." -- Dave MustaneMike,
Sounds like a lock issue or possibly a memory issue. What does the
following reveal?
exec sp_configure 'locks'
exec sp_configure 'min server memory'
exec sp_configure 'max server memory'
How much memory is available on the system? How much is SQL Server
currently using?
HTH
Jerry
"Mike Labosh" <mlabosh@.hotmail.com> wrote in message
news:uVtXrEgvFHA.3556@.TK2MSFTNGP12.phx.gbl...
> Server: Msg 1204, Level 19, State 1, Line 30 The SQL Server cannot obtain
> a LOCK resource at this time. Rerun your statement when there are fewer
> active users or ask the system administrator to check the SQL Server lock
> and memory configuration.
>
> We can't find it in BOL.
> --
> Peace & happy computing,
> Mike Labosh, MCSD
> "When you kill a man, you're a murderer.
> Kill many, and you're a conquerer.
> Kill them all and you're a god." -- Dave Mustane
>|||Maybe here?
http://support.microsoft.com/search...SE&ast=2&mode=a
"Mike Labosh" <mlabosh@.hotmail.com> wrote in message
news:uVtXrEgvFHA.3556@.TK2MSFTNGP12.phx.gbl...
> Server: Msg 1204, Level 19, State 1, Line 30 The SQL Server cannot obtain
> a LOCK resource at this time. Rerun your statement when there are fewer
> active users or ask the system administrator to check the SQL Server lock
> and memory configuration.
>
> We can't find it in BOL.
> --
> Peace & happy computing,
> Mike Labosh, MCSD
> "When you kill a man, you're a murderer.
> Kill many, and you're a conquerer.
> Kill them all and you're a god." -- Dave Mustane
>|||> http://support.microsoft.com/search...SE&ast=2&mode=a
I read the article, printed it and showed it to her. Yep, that's exactly
what she's doing.
Loop over SampleSourceArchive table, processing child records. We're
currently in an email-war about "WHY NOT JUST DO A JOIN YOU SILLY USER!!"
Peace & happy computing,
Mike Labosh, MCSD
"When you kill a man, you're a murderer.
Kill many, and you're a conquerer.
Kill them all and you're a god." -- Dave Mustane|||> Sounds like a lock issue or possibly a memory issue. What does the
> following reveal?
> exec sp_configure 'locks'
> exec sp_configure 'min server memory'
> exec sp_configure 'max server memory'
> How much memory is available on the system? How much is SQL Server
> currently using?
Min Locks = 5000
Max Locks = 2 Billion (and some change)
Min Server Memory = 0 -> 2GB
Max Server Memory = 4MB -> 2GB
The server has 3 GB of hard RAM (Can't tell the swap file size from here)
but it looks like SQL Server is set to use anywhere between 0 and 2GB of
memory, if I have interpreted this correctly.
I think she is just applying her "expert" MS Access skills to SQL Server.
Peace & happy computing,
Mike Labosh, MCSD
"When you kill a man, you're a murderer.
Kill many, and you're a conqueror.
Kill them all and you're a god." -- Dave Mustane

can some one please tell me about the server name?

this is my first time to leave messange at here . I wish someone can tell me how to sign in at sql server management studio express CTP s server name .plsssssssssssssssssssssssssssssssssssssssssssssss

This is the SSIS forum. You should try and post the the management tools forum.

Thanks,

Matt

Tuesday, February 14, 2012

Can one confirm this behaviour ?

Say i have a database restored with its data files on say D and E.
When I do a restore again with EM and this time specify the data file paths
to be M and N but at the same time check on the box that states " Force
Restore over existing database", does the database actually try restoring to
the previous D and E and ignore the M and N paths that i specified . Using
SQL 2000
Thanks
Specifying the path will make EM add MOVE options to the RESTORE command. In this case, SQL Server
will not try to access the original path, as specified in the backup header. "Force restore" is only
needed if the database name that you try to restore to already exists and doesn't have the same file
layout as the one you are trying to restore (REPLACE option to the RESTORE command).
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Hassan" <fatima_ja@.hotmail.com> wrote in message news:uutzcrqAFHA.936@.TK2MSFTNGP12.phx.gbl...
> Say i have a database restored with its data files on say D and E.
> When I do a restore again with EM and this time specify the data file paths
> to be M and N but at the same time check on the box that states " Force
> Restore over existing database", does the database actually try restoring to
> the previous D and E and ignore the M and N paths that i specified . Using
> SQL 2000
> Thanks
>

Can one confirm this behaviour ?

Say i have a database restored with its data files on say D and E.
When I do a restore again with EM and this time specify the data file paths
to be M and N but at the same time check on the box that states " Force
Restore over existing database", does the database actually try restoring to
the previous D and E and ignore the M and N paths that i specified . Using
SQL 2000
ThanksSpecifying the path will make EM add MOVE options to the RESTORE command. In
this case, SQL Server
will not try to access the original path, as specified in the backup header.
"Force restore" is only
needed if the database name that you try to restore to already exists and do
esn't have the same file
layout as the one you are trying to restore (REPLACE option to the RESTORE c
ommand).
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Hassan" <fatima_ja@.hotmail.com> wrote in message news:uutzcrqAFHA.936@.TK2MSFTNGP12.phx.gbl.
.
> Say i have a database restored with its data files on say D and E.
> When I do a restore again with EM and this time specify the data file path
s
> to be M and N but at the same time check on the box that states " Force
> Restore over existing database", does the database actually try restoring
to
> the previous D and E and ignore the M and N paths that i specified . Using
> SQL 2000
> Thanks
>

Can one confirm this behaviour ?

Say i have a database restored with its data files on say D and E.
When I do a restore again with EM and this time specify the data file paths
to be M and N but at the same time check on the box that states " Force
Restore over existing database", does the database actually try restoring to
the previous D and E and ignore the M and N paths that i specified . Using
SQL 2000
ThanksSpecifying the path will make EM add MOVE options to the RESTORE command. In this case, SQL Server
will not try to access the original path, as specified in the backup header. "Force restore" is only
needed if the database name that you try to restore to already exists and doesn't have the same file
layout as the one you are trying to restore (REPLACE option to the RESTORE command).
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Hassan" <fatima_ja@.hotmail.com> wrote in message news:uutzcrqAFHA.936@.TK2MSFTNGP12.phx.gbl...
> Say i have a database restored with its data files on say D and E.
> When I do a restore again with EM and this time specify the data file paths
> to be M and N but at the same time check on the box that states " Force
> Restore over existing database", does the database actually try restoring to
> the previous D and E and ignore the M and N paths that i specified . Using
> SQL 2000
> Thanks
>