PMXBOT Log file Viewer


#mongodb logs for Thursday the 2nd of July, 2015

[00:03:02] <Havalx> !
[00:43:40] <MacWinner> is it possible for 2 different perl processes on the same server to generate the same MongoID? multicore machine
[00:44:13] <MacWinner> i mean ObjectId
[00:46:05] <Havalx> !
[00:47:46] <Havalx> References for Gen10's technology? I'ma read the documentation in a few moments.
[00:48:16] <Havalx> Any users here prefer sqlite3 over mongodb?
[00:51:12] <Boomtime> MacWinner: independent processes should not generate the same ObjectID according to the spec
[00:51:37] <kopasetik> Say i have a Mongo collection for Items and another collection for Categories. How do I reference a category (one category can have many items) in an item document?
[00:52:02] <cheeser> list the category IDs in your item docs
[00:52:02] <Boomtime> of the 12 bytes in ObjectID, 2 of them are intended to be a process ID that separates processes on the same machine
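
A minimal sketch of the layout Boomtime describes, using pymongo's bson package (the byte breakdown follows the 2015-era ObjectId spec; this is illustrative, not from the log):

    from bson import ObjectId

    oid = ObjectId()
    raw = oid.binary       # the 12 raw bytes
    timestamp = raw[0:4]   # seconds since the epoch
    machine = raw[4:7]     # hash of the machine's hostname
    pid = raw[7:9]         # process id -- differs between the two perl processes
    counter = raw[9:12]    # per-process increment counter
    print(oid, oid.generation_time)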
[00:55:46] <kopasetik> @cheeser, basically embed as a sub-document?
[00:56:31] <cheeser> no. list the IDs in an array
[00:57:21] <MacWinner> cool, thank you!
[01:05:47] <kopasetik> @cheeser, thx
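
A minimal pymongo sketch of cheeser's suggestion (database and field names are illustrative): each item carries an array of category _ids rather than embedded sub-documents, and array fields match element-wise, so finding all items in a category is a plain query:

    from pymongo import MongoClient

    db = MongoClient().shop                   # hypothetical database
    cat = db.categories.insert_one({"name": "tools"})
    db.items.insert_one({
        "name": "hammer",
        "category_ids": [cat.inserted_id],    # references, not embedded docs
    })
    items = db.items.find({"category_ids": cat.inserted_id})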
[01:36:41] <Havalx> with mongodb, is it possible to query/search the db for specific values and when conditions are met, return the object id/select the entire document given the id?
[01:37:44] <Havalx> I'm sure, yeah. I'ma read more documentation and memorize accepted commands.
[01:49:51] <bonhoeffer> how would i return the first element from a collection?
[01:50:05] <cheeser> db.collection.findOne()
[01:50:21] <cheeser> or
[01:50:27] <cheeser> db.collection.find().limit(1)
[01:50:44] <bonhoeffer> thanks!
[01:51:19] <bonhoeffer> ok, what if i wanted the keys from the result?
[01:57:51] <cheeser> you'd have the whole document...
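
The shell lines above in pymongo terms (a sketch; the database and collection names are stand-ins): find_one() returns a plain dict, so the keys come along with the whole document:

    from pymongo import MongoClient

    db = MongoClient().test                      # hypothetical database
    doc = db.collection.find_one()               # like db.collection.findOne()
    first = next(db.collection.find().limit(1))  # like find().limit(1)
    print(list(doc.keys()))                      # the document's field names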
[01:59:47] <bonhoeffer> i’m playing with variety
[02:04:40] <bonhoeffer> mongo handles dates well, correct? not just as strings.
[02:04:54] <cheeser> ISODate
[02:05:14] <bonhoeffer> thanks
[02:05:27] <cheeser> the types mongo supports: http://bsonspec.org/spec.html
[02:07:18] <bonhoeffer> thanks!
[02:09:32] <cheeser> np
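
A minimal pymongo sketch of cheeser's point: Python datetimes round-trip as real BSON dates (rendered as ISODate in the shell), not strings; note BSON dates carry millisecond precision:

    from datetime import datetime
    from pymongo import MongoClient

    db = MongoClient().test                  # hypothetical database
    db.events.insert_one({"at": datetime.utcnow()})
    print(type(db.events.find_one()["at"]))  # <class 'datetime.datetime'>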
[04:26:52] <Havalx> I'm using find_one() with a constructed JSON object.
[04:27:24] <Havalx> But it only returns the first entry in the db.
[04:27:59] <Boomtime> well, it found... one
[04:28:05] <Havalx> Yeah.
[04:28:50] <Havalx> Boomtime: Which it does; but I passed it values to search.
[04:28:54] <Boomtime> ok, so the issue is that it found the wrong thing?
[04:28:59] <Havalx> Yeah.
[04:29:13] <Boomtime> righto, can you paste your query?
[04:30:29] <Havalx> Boomtime: I fixed it.
[04:30:53] <Havalx> Boomtime: Thanks for the message!
[04:31:39] <Havalx> Boomtime: I'll update it in a min.
[04:31:49] <Boomtime> np, glad you found the issue
[04:37:38] <Havalx> Boomtime: I'm generating a document from variables; what is the formal syntax for its construction using find_one()? the docs show {"key": "value"}
[04:38:25] <Havalx> When substituting key, value with python variables I have: d = {x, y}
[04:38:57] <Havalx> Boomtime: does d need to be strings?
[04:39:54] <Havalx> Boomtime: and proper " ' format.
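
What Havalx appears to be asking, sketched (the names here are illustrative): in Python, {x, y} is a *set* literal, while a query filter must be a dict, {x: y}; the keys must be strings, the values any BSON-serializable type:

    x = "username"                 # keys must be strings
    y = "Havalx"                   # values: any BSON-serializable type
    d = {x: y}                     # a dict -- equivalent to {"username": "Havalx"}
    # d = {x, y} would be a set, which find_one() rejects
    doc = collection.find_one(d)   # 'collection' stands in for a real collection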
[05:19:20] <Havalx> Boomtime: http://pastebin.com/BtwCYptX
[05:45:26] <adsf> doubt many ppl are awake at this hour! :)
[05:56:50] <polydaic> Should I be using mongodb if I want to store a representation of a table
[05:57:18] <polydaic> or is it silly to use a NoSQL db in this case
[05:59:53] <adsf> probably not really needed :p
[05:59:57] <adsf> what is the purpose?
[06:00:33] <Boomtime> Havalx: what you pasted is pretty short.. can you try it in the mongo shell?
[06:01:10] <Boomtime> a query for {"x":7} will match any document that has a key called "x" with an integral value of 7
[06:01:33] <Boomtime> note that if x is an array in some (or all) documents then each value of the array is considered separately
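
Boomtime's two points, sketched in pymongo (names illustrative): {"x": 7} matches a scalar 7 and also any array containing 7, because array values are considered one by one:

    from pymongo import MongoClient

    db = MongoClient().test        # hypothetical database
    db.demo.insert_many([
        {"x": 7},                  # matches
        {"x": [5, 6, 7]},          # also matches: each array element is tested
        {"x": [5, 6]},             # does not match
    ])
    for doc in db.demo.find({"x": 7}):
        print(doc)                 # the first two documents come back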
[06:01:57] <polydaic> adsf: I'm building something similar to TypeForm
[06:02:07] <polydaic> in usecase
[06:02:38] <adsf> are you replacing the table or just describing it?
[06:03:06] <adsf> Has anyone used a List<WriteModel<Model>> in a bulk update?
[06:03:12] <adsf> bulkWrite sorry, in java
[06:03:24] <polydaic> adsf: I'm describing it in arrays of objects
[06:03:40] <adsf> I have List<Model> but cant quite squeeze it into the WriteModel<Model> list
[06:04:12] <adsf> so you can recreate the database elsewhere?
[06:04:18] <Boomtime> Havalx: can you try your query in the mongo shell?
[06:08:13] <Havalx> Boomtime: It runs.
[06:08:50] <polydaic> asdf yep
[06:08:58] <polydaic> adsf:
[06:09:13] <polydaic> but I'm like 80% done with mongodb
[06:09:13] <Havalx> Boomtime: I passed {"x": 7} into insert.
[06:10:56] <adsf> polydaic: i suppose it could work, i mostly see people doing it in xml
[06:11:30] <polydaic> wait why would you use xml over *SQL
[06:11:35] <polydaic> *over a db
[06:11:45] <adsf> i mean just to store a representation of the schema of the db :p
[06:12:16] <polydaic> hmmmmm
[06:12:25] <polydaic> adsf: so what you mean is I should store the schema in mongo
[06:12:44] <polydaic> and then update the postgresql db from that
[06:12:48] <adsf> you can store the schema where you want :) Long as you are happy getting it out
[06:13:08] <adsf> you could export the table schema from postgres as sql
[06:17:53] <polydaic> also
[06:18:00] <polydaic> is mongoose really terrible?
[06:18:05] <polydaic> i've heard bad things about it
[06:23:35] <gemtastic> It depends on what you're gonna do with it I suppose
[06:23:48] <gemtastic> Right tool for the right task
[06:42:02] <adsf> if i am using the new models in mongo 3, is it best to work on the iterator, or move to something else?
[06:42:12] <adsf> currently im in java and i convert it to a list of models
[07:20:12] <adsf> anyone know why update_mongo.add(new UpdateOneModel<Mongo_Game>(collection.find(game).first(), game)); would throw 'filter cannot be null'?
[07:20:21] <adsf> i debugged and there is definitely an object there
[08:11:07] <adsf> can never find any examples of updating an existing document from an object queried with a codec :(
[08:31:17] <skiold> morning, what is the default value of :connect for Mongo::Client.new with the ruby driver (v2)?
[08:31:27] <skiold> as in http://api.mongodb.org/ruby/current/Mongo/Client.html#constructor_details
[10:25:23] <aps> While trying to start mongos, I get the following error:
[10:25:24] <aps> "ERROR: error upgrading config database to v5 :: caused by :: could not write initial changelog entry for upgrade :: caused by :: failed to write to changelog: could not verify config servers were active and reachable before write"
[10:26:29] <aps> I can connect to config members. what is the reason for this error?
[10:33:15] <arussel> I have a list of document {"a": "a"},{"a":"b"}, {"a":"c"} and a map (a -> 1, b -> 2, c -> 3) outside of mongo. Is there a way in an aggregate to have a replaced by 1, b by 2 and c by 3 so the list becomes {"a":1}, {"a":2},{"a":3} ?
[10:59:15] <synthotronic> Am following the instructions at http://docs.mongodb.org/v2.2/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux/ to install version 2.2.2, but it says "Package mongo-10gen is obsoleted by mongodb-org, trying to install mongodb-org-2.6.10-1.x86_64 instead", then tries to install 2.6.10-1. Anyone know the best way to actually install 2.2.2?
[11:17:19] <synthotronic> (I just grabbed the packages directly from http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/RPMS/ and installed with rpm command.)
[11:27:49] <cheeser> why such an old version?
[11:48:55] <adsf> cheeser: never managed to get the bulk write working. So I copped out and updated in a loop
[11:50:18] <adsf> also if i do an update instead of a replace i get an "Invalid BSON field name _id" error :(
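
A pymongo sketch of the distinction adsf is hitting (the Java driver's WriteModel classes mirror these; names here are illustrative): ReplaceOne takes a full document, where a matching _id is fine; UpdateOne takes an update built from operators like $set, which must not carry _id at the top level -- hence the "Invalid BSON field name _id" error:

    from pymongo import MongoClient, ReplaceOne, UpdateOne

    db = MongoClient().test                  # hypothetical database
    requests = []
    for g in db.games.find():                # previously queried documents
        g["score"] = g.get("score", 0) + 1
        # replacing the whole document works (keeping the same _id is allowed):
        requests.append(ReplaceOne({"_id": g["_id"]}, g))
        # an update must use operators and omit _id, e.g.:
        # requests.append(UpdateOne({"_id": g["_id"]}, {"$set": {"score": g["score"]}}))
    if requests:
        db.games.bulk_write(requests)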
[11:59:47] <adsf> /wc 13
[11:59:49] <adsf> fail
[12:40:42] <cheeser> adsf: sorry to hear that. if you post your failed attempt to the mailing list, jeff and I can work it out.
[12:45:47] <adsf> cheeser: just the mongodb-user one?
[12:50:24] <matthavard> Is there a way to run mongo so that it just writes to a local file like sqlite?
[12:51:25] <matthavard> Ah I guess there's this: https://github.com/hamiltop/MongoLiteDB
[12:52:35] <matthavard> Oh wait that's pure ruby
[13:01:02] <StephenLynx> but
[13:01:06] <StephenLynx> mongo just writes to a local file.
[13:01:30] <StephenLynx> pretty much any database does that.
[13:16:34] <cheeser> adsf: yeah
[13:16:49] <cheeser> matthavard: what are you trying to do?
[13:17:31] <matthavard> cheeser I just want to use mongo for a small app that I just started making. I don't want to have to run the whole server and everything
[13:17:43] <matthavard> Like for the same purposes one would use sqlite
[13:18:29] <cheeser> what is "the whole server" to you?
[13:18:35] <matthavard> StephenLynx: There is a huge difference in the way mongo writes to a file and the way sqlite writes to a file. Let's not be pedantic here
[13:18:35] <cheeser> you want to embed mongo in your app?
[13:19:01] <matthavard> I don't want to have a mongod process running the whole time
[13:19:02] <StephenLynx> I am not being pedantic. That is LITERALLY what happens.
[13:19:13] <StephenLynx> if you don't know how sqlite works, don't call me pedantic.
[13:19:15] <cheeser> mongo isn't really set up to embed in applications.
[13:19:27] <matthavard> cheeser: that is what I am gathering
[13:19:30] <cheeser> but you can start/stop mongo with your app
[13:20:15] <matthavard> think I'll just dump to a json file
[13:20:37] <Derick> you might as well use sqlite then
[13:20:49] <Derick> as that's optimized for local on-disk manipulations
[13:20:59] <Derick> and handles locking a bit better than your app...
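
Derick's suggestion, sketched with Python's stdlib sqlite3: a single local file, no daemon to keep running, and the library handles locking:

    import sqlite3

    conn = sqlite3.connect("app.db")   # the whole database is this one file
    conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("greeting", "hello"))
    conn.commit()
    print(conn.execute("SELECT v FROM kv WHERE k = ?", ("greeting",)).fetchone())
    conn.close()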
[14:51:29] <Absolute0> I have been using postgresql for all my projects. What will I be gaining by switching to mongo. What will I be losing?
[15:20:30] <amit001> Hi all, is the pymongo project managed by MongoDB?
[15:28:20] <cheeser> yes. http://docs.mongodb.org/ecosystem/drivers/python/
[15:41:10] <proteneer_> my upstart fails
[15:41:14] <proteneer_> it just says fail
[15:41:17] <proteneer_> and i can’t find the fucking log file anywhere
[15:41:21] <proteneer_> as to why its failing
[15:44:30] <ehershey> upstart is a pain
[15:44:41] <ehershey> are you using the ubuntu package?
[15:44:55] <proteneer_> yes
[15:45:34] <proteneer_> /usr/bin/mongod --config /etc/mongod.conf
[15:45:38] <proteneer_> works perfectly fine
[16:20:13] <danijoo> hey, i need to somehow reduce the size of a mongodb that is used for caching. all entries have an updatedAt field. If I set a TTL index, will this immediately delete all entries that are expired?
[16:23:50] <cheeser> yes, give or take a minute.
[16:24:02] <cheeser> but it won't necessarily reduce your on disk footprint
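
A pymongo sketch of the TTL index (names illustrative). Two caveats from this thread: the background TTL monitor runs roughly once a minute, and the indexed field must hold BSON dates -- NumberLong values are ignored by it, which is what danijoo runs into below:

    from pymongo import MongoClient

    cache = MongoClient().cachedb.cache    # hypothetical names
    # expireAfterSeconds=0 expires documents as soon as updatedAt is in the
    # past; use e.g. 3600 to keep entries for an hour past their updatedAt
    cache.create_index("updatedAt", expireAfterSeconds=0)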
[16:24:53] <danijoo> its a 100gb database and i could shrink the entry count to about 25% of that. so i think it will :)
[16:25:25] <cheeser> disk space isn't necessarily released back to the OS
[16:25:44] <danijoo> i was on the wrong docs page. the 3.0 docs explain how it deletes pretty well
[16:25:50] <danijoo> cheeser, it doesnt? oh..
[16:26:15] <danijoo> can i force it somehow? Any keyword to google?
[16:26:37] <cheeser> http://blog.mongolab.com/2014/01/managing-disk-space-in-mongodb/
[16:27:24] <danijoo> awesome thanks.
[16:27:51] <danijoo> im moving to a new server right now. this is the only time where i can do this maintenance work without risk to production
[16:28:40] <danijoo> so.. the strategy would be to apply the TTL, and then do a db.repairDatabase()
[16:30:23] <cheeser> that's one option
[16:30:36] <cheeser> probably your best one. though that requires downtime.
[16:30:54] <danijoo> thats fine. i already dumped all data from server1 to server2
[16:31:10] <danijoo> i can now do full maintenance on the new collection before switching the frontend to use it
[16:38:10] <allisthere2love> Hi, can somebody help me with user authentication in a 3 mongod, 1 arbiter, 3 config servers and 2 mongos cluster?
[16:40:36] <allisthere2love> I've created an application user in the primary mongod, but it does not exist in mongos. From what I understood, I have to create the user in mongos, but this user created in mongos isn't 'synced' to mongod
[16:40:40] <allisthere2love> is it correct?
[16:40:54] <allisthere2love> the user databases are independent in mongod and mongos?
[16:41:31] <cheeser> that's correct, i believe
[16:42:54] <danijoo> oh dammit. just realized my updatedAt fields are NumberLong, not dates :/
[16:43:26] <allisthere2love> thanks, Cheeser. I was thinking there was some problem in the fact that I've created the user X in mongos and can't authenticate with it in mongod. But as I was writing, I think I've understood it :)
[17:31:44] <danijoo> is there an easy way to replace a NumberLong field of every document in a collection with the corresponding date?
[17:47:32] <joannac> danijoo: write a script and do it as a bulk update?
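
joannac's suggestion, sketched (assuming the NumberLong values are epoch milliseconds -- adjust the division if yours are seconds; names are illustrative):

    from datetime import datetime
    from pymongo import MongoClient, UpdateOne

    cache = MongoClient().cachedb.cache  # hypothetical names
    ops = []
    # $type 18 = 64-bit integer, i.e. the NumberLong fields
    for doc in cache.find({"updatedAt": {"$type": 18}}, {"updatedAt": 1}):
        when = datetime.utcfromtimestamp(doc["updatedAt"] / 1000.0)
        ops.append(UpdateOne({"_id": doc["_id"]}, {"$set": {"updatedAt": when}}))
        if len(ops) == 1000:             # flush in batches
            cache.bulk_write(ops)
            ops = []
    if ops:
        cache.bulk_write(ops)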
[18:50:38] <greyTEO> is there a limit or recommendation on how many indexes to have?
[18:51:59] <danijoo> i'd say as few as possible, and make sure they all fit into your memory
[18:52:30] <greyTEO> that is what I was thinking.
[19:08:15] <greyTEO> im having a problem querying for documents where a field doesn't exist. From my understanding, $exists does not use an index?
[19:08:51] <greyTEO> since I am not using a sparse index, I tried 'field': null, but still seems to be the same slow query.
[19:10:26] <greyTEO> collection has about 2.5 million records.
[19:24:26] <cheeser> null is not the same as not $exists
[19:30:13] <GothAlice> Thus constructs like {$or: [{published: {$exists: 0}}, {published: null}, {published: {$lt: now}}]}
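
GothAlice's filter transliterated to pymongo, for reference -- it matches documents where "published" is absent, explicitly null, or a date already in the past:

    from datetime import datetime

    now = datetime.utcnow()
    query = {"$or": [
        {"published": {"$exists": False}},  # field absent entirely
        {"published": None},                # field present but null
        {"published": {"$lt": now}},        # field is a date before now
    ]}
    docs = collection.find(query)           # 'collection' is a stand-in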
[19:50:05] <greyTEO> I thought the index stored a null value for documents that did not have the field for the index.
[19:50:18] <greyTEO> when sparse index was not being used.
[19:50:43] <greyTEO> needless to say it didn't work...
[20:11:06] <ngl> Really quick question:
[20:11:16] <ngl> Can I not do this?
[20:11:17] <ngl> unitsCollection.find({serial: /[a-z]/}, 'serial', function(err, units) {
[20:11:49] <ngl> I know I can in mongo, but with a mongoose model?
[20:12:11] <ngl> The regex part is what I'm asking about.
[20:18:19] <ngl> Nevermind. That is not where I'm breaking.
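
For reference, the same query outside mongoose, sketched in pymongo (collection name is hypothetical): a compiled regex (or an explicit {"$regex": ...}) is a valid filter value, and the second argument projects just the serial field:

    import re
    from pymongo import MongoClient

    units = MongoClient().test.units
    for u in units.find({"serial": re.compile("[a-z]")}, {"serial": 1}):
        print(u["serial"])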
[20:36:36] <tejasmanohar> whats the best way to keep mongodb in sync with stuff published to postgresql?
[20:39:30] <cheeser> use something like flume or sqoop. maybe mongo-connector though i think that tends to be for outbound data.
[20:46:43] <tejasmanohar> ah
[20:46:48] <tejasmanohar> yeah thats all ive seen... outbound data